Test Report: KVM_Linux_crio 21918

                    
08454a179ffa60c8ae500105aac58654b5cdef58:2025-11-19:42399

Failed tests (39/190)

Order  Failed test  Duration (s)
37 TestAddons/parallel/Ingress 157.41
69 TestFunctional/serial/SoftStart 1577.04
71 TestFunctional/serial/KubectlGetPods 742.7
74 TestFunctional/serial/CacheCmd/cache/add_remote 1.64
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0
77 TestFunctional/serial/CacheCmd/cache/list 0
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0
79 TestFunctional/serial/CacheCmd/cache/cache_reload 0
80 TestFunctional/serial/CacheCmd/cache/delete 0
81 TestFunctional/parallel 0
100 TestMultiControlPlane/serial/RestartClusterKeepsNodes 663.18
101 TestMultiControlPlane/serial/DeleteSecondaryNode 1.56
102 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.52
103 TestMultiControlPlane/serial/StopCluster 7.39
104 TestMultiControlPlane/serial/RestartCluster 120.49
105 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.27
106 TestMultiControlPlane/serial/AddSecondaryNode 81.39
107 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 3.49
173 TestPreload 159.28
205 TestISOImage/Binaries/crictl 0
206 TestISOImage/Binaries/curl 0
207 TestISOImage/Binaries/docker 0
208 TestISOImage/Binaries/git 0
209 TestISOImage/Binaries/iptables 0
210 TestISOImage/Binaries/podman 0
211 TestISOImage/Binaries/rsync 0
212 TestISOImage/Binaries/socat 0
213 TestISOImage/Binaries/wget 0
214 TestISOImage/Binaries/VBoxControl 0
215 TestISOImage/Binaries/VBoxService 0
318 TestISOImage/PersistentMounts//data 0
319 TestISOImage/PersistentMounts//var/lib/docker 0
320 TestISOImage/PersistentMounts//var/lib/cni 0
321 TestISOImage/PersistentMounts//var/lib/kubelet 0
322 TestISOImage/PersistentMounts//var/lib/minikube 0
323 TestISOImage/PersistentMounts//var/lib/toolbox 0
324 TestISOImage/PersistentMounts//var/lib/boot2docker 0
325 TestISOImage/VersionJSON 0
326 TestISOImage/eBPFSupport 7200.063
TestAddons/parallel/Ingress (157.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-638975 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-638975 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-638975 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2fac4c62-e573-4306-8fca-879beef43c13] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2fac4c62-e573-4306-8fca-879beef43c13] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.006802395s
I1119 21:50:09.288459  121369 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-638975 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.092542616s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
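
Note: exit status 28 from the remote command is curl's "operation timed out" code, so the ssh session worked but the ingress never answered on 127.0.0.1 inside the VM. A minimal Go sketch of the same probe, assuming the out/minikube-linux-amd64 binary and the addons-638975 profile from this run (illustration only, not part of addons_test.go):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Overall guard so the probe cannot hang the way the test step did.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Same check the test runs: curl the in-VM ingress with the nginx Host header.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-638975",
		"ssh", "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
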
addons_test.go:288: (dbg) Run:  kubectl --context addons-638975 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.215
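
The nslookup step verifies the ingress-dns addon by querying the VM address reported by `minikube ip` (192.168.39.215) for hello-john.test. A rough Go equivalent of that lookup, assuming the same address and record name (a sketch, not harness code):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver that ignores /etc/resolv.conf and sends every query to the
	// minikube VM on port 53, mirroring `nslookup hello-john.test 192.168.39.215`.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.39.215:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	fmt.Println(addrs, err)
}
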
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-638975 -n addons-638975
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 logs -n 25: (1.574760902s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-103796 │ download-only-103796 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ start │ --download-only -p binary-mirror-538182 --alsologtostderr --binary-mirror http://127.0.0.1:41025 --driver=kvm2  --container-runtime=crio │ binary-mirror-538182 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ │
	│ delete │ -p binary-mirror-538182 │ binary-mirror-538182 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ addons │ disable dashboard -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ │
	│ addons │ enable dashboard -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ │
	│ start │ -p addons-638975 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ addons-638975 addons disable volcano --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ addons-638975 addons disable gcp-auth --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ enable headlamp -p addons-638975 --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ addons-638975 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ addons-638975 addons disable metrics-server --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ addons-638975 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ addons-638975 addons disable headlamp --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:50 UTC │
	│ ip │ addons-638975 ip │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons │ addons-638975 addons disable registry --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ ssh │ addons-638975 ssh cat /opt/local-path-provisioner/pvc-2210aff6-240c-446f-aed2-60b4ee919562_default_test-pvc/file1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ addons │ addons-638975 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ ssh │ addons-638975 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ │
	│ addons │ addons-638975 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ addons │ addons-638975 addons disable registry-creds --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ addons │ addons-638975 addons disable yakd --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ addons │ addons-638975 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ addons │ addons-638975 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ ip │ addons-638975 ip │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:12.549611  121970 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:12.549858  121970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:12.549867  121970 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:12.549871  121970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:12.550064  121970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 21:47:12.550600  121970 out.go:368] Setting JSON to false
	I1119 21:47:12.551433  121970 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12580,"bootTime":1763576253,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:47:12.551503  121970 start.go:143] virtualization: kvm guest
	I1119 21:47:12.553343  121970 out.go:179] * [addons-638975] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:47:12.555298  121970 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:47:12.555288  121970 notify.go:221] Checking for updates...
	I1119 21:47:12.558237  121970 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:12.559826  121970 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 21:47:12.561104  121970 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:47:12.562473  121970 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:47:12.563610  121970 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:47:12.565046  121970 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:12.596955  121970 out.go:179] * Using the kvm2 driver based on user configuration
	I1119 21:47:12.598061  121970 start.go:309] selected driver: kvm2
	I1119 21:47:12.598081  121970 start.go:930] validating driver "kvm2" against <nil>
	I1119 21:47:12.598094  121970 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:47:12.598810  121970 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:12.599074  121970 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:47:12.599102  121970 cni.go:84] Creating CNI manager for ""
	I1119 21:47:12.599146  121970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:47:12.599154  121970 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1119 21:47:12.599193  121970 start.go:353] cluster config:
	{Name:addons-638975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-638975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1119 21:47:12.599289  121970 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:47:12.600807  121970 out.go:179] * Starting "addons-638975" primary control-plane node in "addons-638975" cluster
	I1119 21:47:12.602040  121970 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:12.602087  121970 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 21:47:12.602099  121970 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:12.602187  121970 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 21:47:12.602198  121970 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:47:12.602513  121970 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/config.json ...
	I1119 21:47:12.602537  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/config.json: {Name:mk4f3f49f5644844d6d44e62c9678b272687e448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:12.602687  121970 start.go:360] acquireMachinesLock for addons-638975: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 21:47:12.602738  121970 start.go:364] duration metric: took 35.982µs to acquireMachinesLock for "addons-638975"
	I1119 21:47:12.602757  121970 start.go:93] Provisioning new machine with config: &{Name:addons-638975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-638975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:47:12.602804  121970 start.go:125] createHost starting for "" (driver="kvm2")
	I1119 21:47:12.604334  121970 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1119 21:47:12.604507  121970 start.go:159] libmachine.API.Create for "addons-638975" (driver="kvm2")
	I1119 21:47:12.604537  121970 client.go:173] LocalClient.Create starting
	I1119 21:47:12.604646  121970 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem
	I1119 21:47:12.941925  121970 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem
	I1119 21:47:13.077312  121970 main.go:143] libmachine: creating domain...
	I1119 21:47:13.077335  121970 main.go:143] libmachine: creating network...
	I1119 21:47:13.078985  121970 main.go:143] libmachine: found existing default network
	I1119 21:47:13.079234  121970 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1119 21:47:13.079818  121970 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e0a890}
	I1119 21:47:13.079954  121970 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-638975</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1119 21:47:13.085477  121970 main.go:143] libmachine: creating private network mk-addons-638975 192.168.39.0/24...
	I1119 21:47:13.157224  121970 main.go:143] libmachine: private network mk-addons-638975 192.168.39.0/24 created
	I1119 21:47:13.157563  121970 main.go:143] libmachine: <network>
	  <name>mk-addons-638975</name>
	  <uuid>f8a936d4-3dd6-4ef7-9661-b5eadef4a28e</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:c8:61:70'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
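
Note: the lines above show the driver defining the mk-addons-638975 network from XML and then starting it. A minimal sketch of that flow using the libvirt.org/go/libvirt bindings — the package choice, file name and error handling are assumptions for illustration, not the kvm2 driver's actual code:

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// XML of the same shape as the mk-addons-638975 definition logged above,
	// kept in a local file for this sketch (hypothetical path).
	xml, err := os.ReadFile("mk-addons-638975.xml")
	if err != nil {
		panic(err)
	}

	// Define the persistent network, then bring it up.
	network, err := conn.NetworkDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	if err := network.Create(); err != nil {
		panic(err)
	}
	fmt.Println("network mk-addons-638975 defined and started")
}
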
	
	I1119 21:47:13.157602  121970 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975 ...
	I1119 21:47:13.157628  121970 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21918-117497/.minikube/cache/iso/amd64/minikube-v1.37.0-1763575914-21918-amd64.iso
	I1119 21:47:13.157668  121970 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:47:13.157747  121970 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21918-117497/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21918-117497/.minikube/cache/iso/amd64/minikube-v1.37.0-1763575914-21918-amd64.iso...
	I1119 21:47:13.405308  121970 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa...
	I1119 21:47:13.624975  121970 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/addons-638975.rawdisk...
	I1119 21:47:13.625025  121970 main.go:143] libmachine: Writing magic tar header
	I1119 21:47:13.625047  121970 main.go:143] libmachine: Writing SSH key tar header
	I1119 21:47:13.625158  121970 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975 ...
	I1119 21:47:13.625243  121970 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975
	I1119 21:47:13.625279  121970 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975 (perms=drwx------)
	I1119 21:47:13.625300  121970 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21918-117497/.minikube/machines
	I1119 21:47:13.625317  121970 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21918-117497/.minikube/machines (perms=drwxr-xr-x)
	I1119 21:47:13.625343  121970 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:47:13.625354  121970 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21918-117497/.minikube (perms=drwxr-xr-x)
	I1119 21:47:13.625368  121970 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21918-117497
	I1119 21:47:13.625385  121970 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21918-117497 (perms=drwxrwxr-x)
	I1119 21:47:13.625404  121970 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1119 21:47:13.625420  121970 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1119 21:47:13.625437  121970 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1119 21:47:13.625449  121970 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1119 21:47:13.625458  121970 main.go:143] libmachine: checking permissions on dir: /home
	I1119 21:47:13.625467  121970 main.go:143] libmachine: skipping /home - not owner
	I1119 21:47:13.625477  121970 main.go:143] libmachine: defining domain...
	I1119 21:47:13.626963  121970 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-638975</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/addons-638975.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-638975'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1119 21:47:13.632136  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:20:90:1d in network default
	I1119 21:47:13.632763  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:13.632786  121970 main.go:143] libmachine: starting domain...
	I1119 21:47:13.632790  121970 main.go:143] libmachine: ensuring networks are active...
	I1119 21:47:13.633625  121970 main.go:143] libmachine: Ensuring network default is active
	I1119 21:47:13.634012  121970 main.go:143] libmachine: Ensuring network mk-addons-638975 is active
	I1119 21:47:13.634613  121970 main.go:143] libmachine: getting domain XML...
	I1119 21:47:13.635618  121970 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-638975</name>
	  <uuid>d2f6eefa-c919-44ed-8f0e-857a5c1f0052</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/addons-638975.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:21:7b:01'/>
	      <source network='mk-addons-638975'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:20:90:1d'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 21:47:14.922278  121970 main.go:143] libmachine: waiting for domain to start...
	I1119 21:47:14.923762  121970 main.go:143] libmachine: domain is now running
	I1119 21:47:14.923787  121970 main.go:143] libmachine: waiting for IP...
	I1119 21:47:14.924718  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:14.925409  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:14.925428  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:14.925847  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:14.925947  121970 retry.go:31] will retry after 226.932908ms: waiting for domain to come up
	I1119 21:47:15.154404  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:15.155116  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:15.155138  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:15.155460  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:15.155513  121970 retry.go:31] will retry after 236.628169ms: waiting for domain to come up
	I1119 21:47:15.394274  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:15.395097  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:15.395119  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:15.395500  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:15.395550  121970 retry.go:31] will retry after 295.969531ms: waiting for domain to come up
	I1119 21:47:15.693131  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:15.693811  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:15.693829  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:15.694195  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:15.694236  121970 retry.go:31] will retry after 411.381038ms: waiting for domain to come up
	I1119 21:47:16.106739  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:16.107500  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:16.107525  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:16.107823  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:16.107864  121970 retry.go:31] will retry after 653.932147ms: waiting for domain to come up
	I1119 21:47:16.764163  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:16.765056  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:16.765083  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:16.765430  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:16.765479  121970 retry.go:31] will retry after 779.178363ms: waiting for domain to come up
	I1119 21:47:17.545926  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:17.546780  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:17.546800  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:17.547154  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:17.547197  121970 retry.go:31] will retry after 1.152999404s: waiting for domain to come up
	I1119 21:47:18.702472  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:18.703108  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:18.703126  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:18.703449  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:18.703484  121970 retry.go:31] will retry after 1.138184876s: waiting for domain to come up
	I1119 21:47:19.843787  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:19.844485  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:19.844503  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:19.844799  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:19.844833  121970 retry.go:31] will retry after 1.715076016s: waiting for domain to come up
	I1119 21:47:21.562748  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:21.563404  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:21.563431  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:21.563781  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:21.563823  121970 retry.go:31] will retry after 1.804387633s: waiting for domain to come up
	I1119 21:47:23.369931  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:23.370624  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:23.370642  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:23.371040  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:23.371084  121970 retry.go:31] will retry after 1.958292434s: waiting for domain to come up
	I1119 21:47:25.332282  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:25.333151  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:25.333177  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:25.333479  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:25.333523  121970 retry.go:31] will retry after 3.524291994s: waiting for domain to come up
	I1119 21:47:28.860134  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:28.860832  121970 main.go:143] libmachine: no network interface addresses found for domain addons-638975 (source=lease)
	I1119 21:47:28.860854  121970 main.go:143] libmachine: trying to list again with source=arp
	I1119 21:47:28.861158  121970 main.go:143] libmachine: unable to find current IP address of domain addons-638975 in network mk-addons-638975 (interfaces detected: [])
	I1119 21:47:28.861198  121970 retry.go:31] will retry after 3.269463509s: waiting for domain to come up
	I1119 21:47:32.134604  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.135338  121970 main.go:143] libmachine: domain addons-638975 has current primary IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.135357  121970 main.go:143] libmachine: found domain IP: 192.168.39.215
	I1119 21:47:32.135365  121970 main.go:143] libmachine: reserving static IP address...
	I1119 21:47:32.135818  121970 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-638975", mac: "52:54:00:21:7b:01", ip: "192.168.39.215"} in network mk-addons-638975
	I1119 21:47:32.334825  121970 main.go:143] libmachine: reserved static IP address 192.168.39.215 for domain addons-638975
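
Note: the retry.go lines above follow a plain poll-with-growing-delay pattern while waiting for the domain to appear in the DHCP lease/ARP tables. A generic Go sketch of that loop (names and bounds are illustrative, not minikube's retry implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it reports success or the timeout elapses, sleeping a
// growing, jittered delay between attempts, like the "will retry after ..." lines above.
func waitFor(check func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if v, ok := check(); ok {
			return v, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // add jitter
		if delay < 4*time.Second {
			delay *= 2 // back off, capped at a few seconds
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	ip, err := waitFor(func() (string, bool) { return "", false }, 2*time.Second)
	fmt.Println(ip, err)
}
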
	I1119 21:47:32.334857  121970 main.go:143] libmachine: waiting for SSH...
	I1119 21:47:32.334916  121970 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 21:47:32.338156  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.338733  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:32.338765  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.339045  121970 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:32.339340  121970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1119 21:47:32.339356  121970 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 21:47:32.452899  121970 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 21:47:32.453372  121970 main.go:143] libmachine: domain creation complete
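
Note: the "waiting for SSH" step amounts to running `exit 0` over SSH with the machine key created earlier until it succeeds. A sketch with golang.org/x/crypto/ssh, assuming the docker user and the id_rsa path shown in this log (illustration only, not the libmachine code):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.215:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	fmt.Println("exit 0 over ssh:", sess.Run("exit 0")) // nil means SSH is ready
}
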
	I1119 21:47:32.454848  121970 machine.go:94] provisionDockerMachine start ...
	I1119 21:47:32.457320  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.457731  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:32.457754  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.457922  121970 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:32.458125  121970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1119 21:47:32.458137  121970 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:47:32.567741  121970 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 21:47:32.567782  121970 buildroot.go:166] provisioning hostname "addons-638975"
	I1119 21:47:32.570788  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.571338  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:32.571368  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.571550  121970 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:32.571744  121970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1119 21:47:32.571756  121970 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-638975 && echo "addons-638975" | sudo tee /etc/hostname
	I1119 21:47:32.699266  121970 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-638975
	
	I1119 21:47:32.702175  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.702554  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:32.702586  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.702755  121970 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:32.703006  121970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1119 21:47:32.703039  121970 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-638975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-638975/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-638975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:47:32.820778  121970 main.go:143] libmachine: SSH cmd err, output: <nil>: 
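
The hostname step above is deliberately idempotent: it only touches /etc/hosts when no line already ends in the new hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. A rough Go equivalent of that shell snippet (file path and hostname hard-coded here purely for illustration):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic from the log: skip if the hostname is
// already present, rewrite an existing 127.0.1.1 line, otherwise append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // hostname already mapped, nothing to do
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	var out string
	if loop.Match(data) {
		out = loop.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "addons-638975"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
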
	I1119 21:47:32.820815  121970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 21:47:32.820865  121970 buildroot.go:174] setting up certificates
	I1119 21:47:32.820901  121970 provision.go:84] configureAuth start
	I1119 21:47:32.823835  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.824213  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:32.824244  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.826563  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.826928  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:32.826971  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:32.827138  121970 provision.go:143] copyHostCerts
	I1119 21:47:32.827222  121970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 21:47:32.827381  121970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 21:47:32.827472  121970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 21:47:32.827558  121970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.addons-638975 san=[127.0.0.1 192.168.39.215 addons-638975 localhost minikube]
	I1119 21:47:33.167473  121970 provision.go:177] copyRemoteCerts
	I1119 21:47:33.167554  121970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:47:33.170036  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.170350  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.170383  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.170536  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:33.256190  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:47:33.287561  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 21:47:33.318290  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 21:47:33.350668  121970 provision.go:87] duration metric: took 529.744384ms to configureAuth
	I1119 21:47:33.350714  121970 buildroot.go:189] setting minikube options for container-runtime
	I1119 21:47:33.351056  121970 config.go:182] Loaded profile config "addons-638975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:33.353645  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.354050  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.354074  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.354254  121970 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.354482  121970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1119 21:47:33.354497  121970 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:47:33.610296  121970 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:47:33.610334  121970 machine.go:97] duration metric: took 1.15546329s to provisionDockerMachine
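
The last part of provisionDockerMachine writes a small sysconfig drop-in telling CRI-O to treat the service CIDR (10.96.0.0/12) as an insecure registry, then restarts the service. A sketch of that write-and-restart as it would run directly on the guest; this is not minikube's actual code, only the paths and content seen in the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const drop = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	// Same content the log shows being piped through `sudo tee`.
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(drop), 0644); err != nil {
		log.Fatal(err)
	}
	// Restart CRI-O so the drop-in is picked up.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}
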
	I1119 21:47:33.610350  121970 client.go:176] duration metric: took 21.005804s to LocalClient.Create
	I1119 21:47:33.610375  121970 start.go:167] duration metric: took 21.005867777s to libmachine.API.Create "addons-638975"
	I1119 21:47:33.610395  121970 start.go:293] postStartSetup for "addons-638975" (driver="kvm2")
	I1119 21:47:33.610408  121970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:47:33.610487  121970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:47:33.613482  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.613918  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.613943  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.614094  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:33.704346  121970 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:47:33.709832  121970 info.go:137] Remote host: Buildroot 2025.02
	I1119 21:47:33.709869  121970 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 21:47:33.709976  121970 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 21:47:33.710011  121970 start.go:296] duration metric: took 99.60069ms for postStartSetup
	I1119 21:47:33.713033  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.713380  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.713413  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.713642  121970 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/config.json ...
	I1119 21:47:33.713829  121970 start.go:128] duration metric: took 21.111013429s to createHost
	I1119 21:47:33.715848  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.716182  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.716202  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.716360  121970 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.716545  121970 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1119 21:47:33.716555  121970 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 21:47:33.826597  121970 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763588853.788257697
	
	I1119 21:47:33.826621  121970 fix.go:216] guest clock: 1763588853.788257697
	I1119 21:47:33.826629  121970 fix.go:229] Guest: 2025-11-19 21:47:33.788257697 +0000 UTC Remote: 2025-11-19 21:47:33.713840003 +0000 UTC m=+21.214025833 (delta=74.417694ms)
	I1119 21:47:33.826645  121970 fix.go:200] guest clock delta is within tolerance: 74.417694ms
	I1119 21:47:33.826669  121970 start.go:83] releasing machines lock for "addons-638975", held for 21.223901491s
	I1119 21:47:33.829397  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.829753  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.829770  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.830370  121970 ssh_runner.go:195] Run: cat /version.json
	I1119 21:47:33.830460  121970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:47:33.833564  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.833642  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.833974  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.834011  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:33.834016  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.834035  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:33.834211  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:33.834345  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:33.938992  121970 ssh_runner.go:195] Run: systemctl --version
	I1119 21:47:33.945741  121970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:47:34.108169  121970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 21:47:34.115498  121970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:47:34.115569  121970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:47:34.136148  121970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 21:47:34.136176  121970 start.go:496] detecting cgroup driver to use...
	I1119 21:47:34.136236  121970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:47:34.156862  121970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:47:34.174563  121970 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:47:34.174655  121970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:47:34.193149  121970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:47:34.210116  121970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:47:34.358110  121970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:47:34.567823  121970 docker.go:234] disabling docker service ...
	I1119 21:47:34.567930  121970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:47:34.585761  121970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:47:34.602836  121970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:47:34.763199  121970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:47:34.906899  121970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:47:34.924041  121970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:47:34.948950  121970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:47:34.949033  121970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:34.962485  121970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 21:47:34.962579  121970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:34.976331  121970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:34.989794  121970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.002999  121970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:47:35.017161  121970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.030999  121970 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.055402  121970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
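
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup, and open unprivileged port 0 via default_sysctls. A simplified Go version of the same in-place rewrite, covering only the two key fields and assuming the keys already exist in the file:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		log.Fatal(err)
	}
}
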
	I1119 21:47:35.069458  121970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:47:35.081579  121970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 21:47:35.081662  121970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 21:47:35.103706  121970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
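
The failed sysctl above is expected on a fresh guest: br_netfilter is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist. The recovery is simply to load the module and then enable IPv4 forwarding. A compact sketch of that check-then-fallback sequence (run as root on the guest):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Probe first; failure just means the module isn't loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if out, merr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); merr != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", merr, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatal(err)
	}
}
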
	I1119 21:47:35.117438  121970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:35.269947  121970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 21:47:35.382577  121970 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:47:35.382680  121970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:47:35.388699  121970 start.go:564] Will wait 60s for crictl version
	I1119 21:47:35.388780  121970 ssh_runner.go:195] Run: which crictl
	I1119 21:47:35.393445  121970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 21:47:35.441896  121970 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 21:47:35.441994  121970 ssh_runner.go:195] Run: crio --version
	I1119 21:47:35.471537  121970 ssh_runner.go:195] Run: crio --version
	I1119 21:47:35.503783  121970 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 21:47:35.507788  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:35.508254  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:35.508281  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:35.508517  121970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 21:47:35.513594  121970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:47:35.529226  121970 kubeadm.go:884] updating cluster {Name:addons-638975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-638975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:47:35.529339  121970 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:35.529381  121970 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:47:35.569530  121970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 21:47:35.569599  121970 ssh_runner.go:195] Run: which lz4
	I1119 21:47:35.574372  121970 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 21:47:35.579715  121970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1119 21:47:35.579759  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1119 21:47:37.168733  121970 crio.go:462] duration metric: took 1.594392305s to copy over tarball
	I1119 21:47:37.168823  121970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 21:47:38.888412  121970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.719558739s)
	I1119 21:47:38.888444  121970 crio.go:469] duration metric: took 1.719679049s to extract the tarball
	I1119 21:47:38.888455  121970 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1119 21:47:38.930172  121970 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:47:38.977937  121970 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:47:38.977967  121970 cache_images.go:86] Images are preloaded, skipping loading
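
The preload logic above is: ask crictl for the image list, and if the expected kube-apiserver tag is missing, copy the preloaded tarball over and unpack it into /var with lz4, then ask crictl again. A hedged sketch of the check plus extraction; the image name and tarball path come from the log, error handling is trimmed, and this is not the actual minikube implementation:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `crictl images --output json` lists what the runtime already has.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.34.1") {
		log.Println("images already preloaded, skipping extraction")
		return
	}
	// Otherwise unpack the preloaded tarball (already copied to /preloaded.tar.lz4).
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if b, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, b)
	}
}
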
	I1119 21:47:38.977980  121970 kubeadm.go:935] updating node { 192.168.39.215 8443 v1.34.1 crio true true} ...
	I1119 21:47:38.978115  121970 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-638975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-638975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
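
The kubelet drop-in above is generated per node: only --hostname-override and --node-ip change between machines. A minimal text/template sketch that renders the same ExecStart line; the template shape is an illustration of the idea, not the exact template minikube ships:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Render the unit with the values seen in this run.
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "addons-638975",
		"NodeIP":            "192.168.39.215",
	})
}
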
	I1119 21:47:38.978204  121970 ssh_runner.go:195] Run: crio config
	I1119 21:47:39.025294  121970 cni.go:84] Creating CNI manager for ""
	I1119 21:47:39.025321  121970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:47:39.025340  121970 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:47:39.025380  121970 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-638975 NodeName:addons-638975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:47:39.025521  121970 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-638975"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 21:47:39.025590  121970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:47:39.039061  121970 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:47:39.039143  121970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:47:39.051370  121970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 21:47:39.072508  121970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:47:39.093740  121970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
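
At this point the rendered kubeadm config from above has been copied to /var/tmp/minikube/kubeadm.yaml.new (minikube later copies it to kubeadm.yaml before running init). One hedged way to sanity-check such a config on the node without mutating anything is kubeadm's own --dry-run mode; the wrapper below is only an illustration of that idea, not something this test run performs:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Validate the generated config; --dry-run prints what would be applied.
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new",
		"--dry-run")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubeadm dry-run failed: %v", err)
	}
}
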
	I1119 21:47:39.114743  121970 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1119 21:47:39.119035  121970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:47:39.134381  121970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:39.276418  121970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:47:39.307644  121970 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975 for IP: 192.168.39.215
	I1119 21:47:39.307673  121970 certs.go:195] generating shared ca certs ...
	I1119 21:47:39.307694  121970 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:39.307914  121970 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 21:47:39.561330  121970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt ...
	I1119 21:47:39.561363  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt: {Name:mkc0173d6c0bf2a3a33a8da0cf593c60fe4447fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:39.561538  121970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key ...
	I1119 21:47:39.561549  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key: {Name:mk59280c5478efe629d3a9b417f06598456cfc53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:39.561625  121970 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 21:47:39.852030  121970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt ...
	I1119 21:47:39.852061  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt: {Name:mk7d490d7b7429390e7d6758cd6e3f755c13fb95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:39.852233  121970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key ...
	I1119 21:47:39.852247  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key: {Name:mkc32995f4918ccc05217825f512911ef732fbc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:39.852317  121970 certs.go:257] generating profile certs ...
	I1119 21:47:39.852374  121970 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.key
	I1119 21:47:39.852396  121970 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt with IP's: []
	I1119 21:47:39.946792  121970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt ...
	I1119 21:47:39.946822  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: {Name:mk693d35e5f85040b752769a75a0af3fa474924c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:39.946999  121970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.key ...
	I1119 21:47:39.947013  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.key: {Name:mkc8298ac483570530130ee737a534f28cf150c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:39.947097  121970 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.key.15abf909
	I1119 21:47:39.947118  121970 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.crt.15abf909 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I1119 21:47:40.015951  121970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.crt.15abf909 ...
	I1119 21:47:40.015979  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.crt.15abf909: {Name:mk20c86683ca387eb0064b1c6c95bfd799dfbd00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:40.016130  121970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.key.15abf909 ...
	I1119 21:47:40.016154  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.key.15abf909: {Name:mk75ce7e7c55a8ef231b3dfb410108d3070cf775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:40.016234  121970 certs.go:382] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.crt.15abf909 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.crt
	I1119 21:47:40.016304  121970 certs.go:386] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.key.15abf909 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.key
	I1119 21:47:40.016350  121970 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.key
	I1119 21:47:40.016367  121970 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.crt with IP's: []
	I1119 21:47:40.369978  121970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.crt ...
	I1119 21:47:40.370015  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.crt: {Name:mkac7596ae97d145d3fe205390cb7eff9e205f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:40.370211  121970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.key ...
	I1119 21:47:40.370225  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.key: {Name:mk819dffa72c086a556168768620a88ec0aac466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
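
The profile certs above all follow one pattern: a self-signed CA (minikubeCA, proxyClientCA) plus leaf certificates signed by it, with the apiserver cert carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP. A condensed crypto/x509 sketch of that CA-plus-leaf flow; the 2048-bit RSA keys and validity periods are arbitrary choices for the example, not what minikube necessarily uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 1. Self-signed CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// 2. Apiserver leaf cert with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.215"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
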
	I1119 21:47:40.370402  121970 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 21:47:40.370435  121970 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 21:47:40.370457  121970 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:47:40.370480  121970 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 21:47:40.371156  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:47:40.405232  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 21:47:40.438774  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:47:40.471672  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 21:47:40.504191  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 21:47:40.536564  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:47:40.573622  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:47:40.605965  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 21:47:40.638750  121970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:47:40.672621  121970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:47:40.695687  121970 ssh_runner.go:195] Run: openssl version
	I1119 21:47:40.703098  121970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:47:40.717828  121970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:40.724496  121970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:40.724571  121970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:40.733599  121970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
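
The b5213941.0 symlink above follows the standard OpenSSL c_rehash convention: the file name is the subject hash of the CA, which is how OpenSSL locates the certificate during verification. A small sketch that reproduces the same two steps (hash via the openssl CLI, then the symlink); run as root, paths taken from the log:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout -in <cert> prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		log.Fatal(err)
	}
}
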
	I1119 21:47:40.748053  121970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:47:40.753451  121970 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 21:47:40.753530  121970 kubeadm.go:401] StartCluster: {Name:addons-638975 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-638975 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:40.753632  121970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:47:40.753729  121970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:47:40.796264  121970 cri.go:89] found id: ""
	I1119 21:47:40.796343  121970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 21:47:40.810301  121970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 21:47:40.822944  121970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 21:47:40.835226  121970 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 21:47:40.835245  121970 kubeadm.go:158] found existing configuration files:
	
	I1119 21:47:40.835286  121970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 21:47:40.846820  121970 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 21:47:40.846907  121970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 21:47:40.859339  121970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 21:47:40.871736  121970 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 21:47:40.871818  121970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 21:47:40.884528  121970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 21:47:40.898914  121970 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 21:47:40.898983  121970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 21:47:40.913566  121970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 21:47:40.928814  121970 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 21:47:40.928896  121970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
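
The stale-config cleanup above keeps each of the four kubeconfigs only if it already points at https://control-plane.minikube.internal:8443; otherwise the file is removed so kubeadm regenerates it. On a fresh node, as here, none of the files exist and every branch falls through to the delete. A condensed Go version of that loop (an illustration, not minikube's code):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the expected endpoint, keep it
		}
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			log.Fatalf("remove %s: %v", f, err)
		}
	}
}
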
	I1119 21:47:40.945563  121970 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1119 21:47:41.008649  121970 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 21:47:41.008725  121970 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 21:47:41.125304  121970 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 21:47:41.125445  121970 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 21:47:41.125570  121970 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 21:47:41.139802  121970 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 21:47:41.165762  121970 out.go:252]   - Generating certificates and keys ...
	I1119 21:47:41.165960  121970 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 21:47:41.166069  121970 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 21:47:41.477051  121970 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 21:47:41.976053  121970 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 21:47:42.468276  121970 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 21:47:42.644095  121970 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 21:47:42.915165  121970 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 21:47:42.915319  121970 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-638975 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I1119 21:47:42.988540  121970 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 21:47:42.988697  121970 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-638975 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I1119 21:47:43.176807  121970 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 21:47:43.458956  121970 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 21:47:43.552970  121970 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 21:47:43.553155  121970 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 21:47:43.620097  121970 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 21:47:43.653273  121970 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 21:47:43.798360  121970 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 21:47:43.984173  121970 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 21:47:44.051957  121970 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 21:47:44.052116  121970 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 21:47:44.055745  121970 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 21:47:44.122110  121970 out.go:252]   - Booting up control plane ...
	I1119 21:47:44.122223  121970 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 21:47:44.122290  121970 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 21:47:44.122349  121970 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 21:47:44.122457  121970 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 21:47:44.122603  121970 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 21:47:44.122734  121970 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 21:47:44.122846  121970 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 21:47:44.122920  121970 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 21:47:44.270854  121970 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 21:47:44.271006  121970 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 21:47:45.271716  121970 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001603181s
	I1119 21:47:45.273981  121970 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 21:47:45.274115  121970 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.215:8443/livez
	I1119 21:47:45.274276  121970 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 21:47:45.274389  121970 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 21:47:47.602378  121970 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.330353628s
	I1119 21:47:49.033315  121970 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.762545391s
	I1119 21:47:51.273074  121970 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003803424s
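
kubeadm's control-plane-check above is essentially HTTPS polling of three health endpoints until each returns 200. A hedged sketch of that kind of probe loop; InsecureSkipVerify is used here only because the components serve certificates signed by the just-created cluster CA, and a real client would pin that CA instead:

package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 or the timeout expires.
func waitHealthy(url string, timeout time.Duration) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	checks := map[string]string{
		"kube-apiserver":          "https://192.168.39.215:8443/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	for name, url := range checks {
		if !waitHealthy(url, 4*time.Minute) {
			log.Fatalf("%s never became healthy", name)
		}
		log.Printf("%s is healthy", name)
	}
}
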
	I1119 21:47:51.297044  121970 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 21:47:51.310168  121970 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 21:47:51.325750  121970 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 21:47:51.325947  121970 kubeadm.go:319] [mark-control-plane] Marking the node addons-638975 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 21:47:51.341430  121970 kubeadm.go:319] [bootstrap-token] Using token: 6omf0x.9t7h3c9f8h2ftpq7
	I1119 21:47:51.342773  121970 out.go:252]   - Configuring RBAC rules ...
	I1119 21:47:51.342912  121970 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 21:47:51.357955  121970 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 21:47:51.372610  121970 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 21:47:51.378372  121970 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 21:47:51.383763  121970 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 21:47:51.387968  121970 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 21:47:51.680165  121970 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 21:47:52.157075  121970 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 21:47:52.686168  121970 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 21:47:52.687124  121970 kubeadm.go:319] 
	I1119 21:47:52.687224  121970 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 21:47:52.687252  121970 kubeadm.go:319] 
	I1119 21:47:52.687334  121970 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 21:47:52.687345  121970 kubeadm.go:319] 
	I1119 21:47:52.687367  121970 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 21:47:52.687435  121970 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 21:47:52.687496  121970 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 21:47:52.687504  121970 kubeadm.go:319] 
	I1119 21:47:52.687572  121970 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 21:47:52.687582  121970 kubeadm.go:319] 
	I1119 21:47:52.687619  121970 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 21:47:52.687626  121970 kubeadm.go:319] 
	I1119 21:47:52.687703  121970 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 21:47:52.687800  121970 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 21:47:52.687861  121970 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 21:47:52.687885  121970 kubeadm.go:319] 
	I1119 21:47:52.687963  121970 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 21:47:52.688037  121970 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 21:47:52.688044  121970 kubeadm.go:319] 
	I1119 21:47:52.688126  121970 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6omf0x.9t7h3c9f8h2ftpq7 \
	I1119 21:47:52.688232  121970 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:16739887fd324db7a8cdea6e893402f3bc5dd7c816a4db75c774aad3217d7565 \
	I1119 21:47:52.688262  121970 kubeadm.go:319] 	--control-plane 
	I1119 21:47:52.688274  121970 kubeadm.go:319] 
	I1119 21:47:52.688387  121970 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 21:47:52.688396  121970 kubeadm.go:319] 
	I1119 21:47:52.688512  121970 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6omf0x.9t7h3c9f8h2ftpq7 \
	I1119 21:47:52.688638  121970 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:16739887fd324db7a8cdea6e893402f3bc5dd7c816a4db75c774aad3217d7565 
	I1119 21:47:52.690929  121970 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
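The join commands printed above embed a --discovery-token-ca-cert-hash. As a hedged aside (standard kubeadm practice, not something this log performs), that hash can be re-derived on the control-plane node from the cluster CA, assuming it sits at the default path /etc/kubernetes/pki/ca.crt:

    # Recompute the sha256 of the cluster CA public key; it should match the hash in the join command above
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'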
	I1119 21:47:52.690963  121970 cni.go:84] Creating CNI manager for ""
	I1119 21:47:52.690971  121970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:47:52.693464  121970 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1119 21:47:52.694731  121970 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1119 21:47:52.708617  121970 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
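The scp above drops a 496-byte bridge CNI config at /etc/cni/net.d/1-k8s.conflist; its contents are not echoed in the log. A rough way to inspect it afterwards (the path comes straight from the log, jq on the host is an assumption) would be:

    # Dump the bridge CNI conflist minikube just wrote and list the plugin types it chains
    minikube -p addons-638975 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist" | jq -r '.plugins[].type'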
	I1119 21:47:52.736102  121970 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 21:47:52.736189  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-638975 minikube.k8s.io/updated_at=2025_11_19T21_47_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=addons-638975 minikube.k8s.io/primary=true
	I1119 21:47:52.736210  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:52.869793  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:52.904152  121970 ops.go:34] apiserver oom_adj: -16
	I1119 21:47:53.370788  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:53.870500  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:54.370663  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:54.870925  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:55.369923  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:55.869994  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:56.370529  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:56.870148  121970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:56.965290  121970 kubeadm.go:1114] duration metric: took 4.229184716s to wait for elevateKubeSystemPrivileges
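The repeated "kubectl get sa default" runs above are a poll loop: elevateKubeSystemPrivileges keeps retrying until the "default" ServiceAccount exists, which is the 4.2s wait reported here. A shell equivalent of that wait, built from the exact command in the log, is:

    # Keep retrying the logged command until the "default" ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done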
	I1119 21:47:56.965342  121970 kubeadm.go:403] duration metric: took 16.211814188s to StartCluster
	I1119 21:47:56.965370  121970 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:56.965515  121970 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 21:47:56.965996  121970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:56.966219  121970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 21:47:56.966270  121970 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:47:56.966303  121970 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
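The toEnable map above reflects the addon selection for this profile before the individual "Setting addon" goroutines fan out below. A hedged, after-the-fact way to view the same state from the host (real minikube subcommand, profile name taken from the log):

    # List addon enable/disable status for the profile used in this run
    minikube addons list -p addons-638975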
	I1119 21:47:56.966424  121970 addons.go:70] Setting yakd=true in profile "addons-638975"
	I1119 21:47:56.966452  121970 addons.go:239] Setting addon yakd=true in "addons-638975"
	I1119 21:47:56.966453  121970 addons.go:70] Setting ingress-dns=true in profile "addons-638975"
	I1119 21:47:56.966465  121970 addons.go:70] Setting registry-creds=true in profile "addons-638975"
	I1119 21:47:56.966487  121970 addons.go:239] Setting addon ingress-dns=true in "addons-638975"
	I1119 21:47:56.966488  121970 config.go:182] Loaded profile config "addons-638975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:56.966498  121970 addons.go:239] Setting addon registry-creds=true in "addons-638975"
	I1119 21:47:56.966509  121970 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-638975"
	I1119 21:47:56.966502  121970 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-638975"
	I1119 21:47:56.966496  121970 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-638975"
	I1119 21:47:56.966526  121970 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-638975"
	I1119 21:47:56.966489  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.966534  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.966538  121970 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-638975"
	I1119 21:47:56.966546  121970 addons.go:70] Setting inspektor-gadget=true in profile "addons-638975"
	I1119 21:47:56.966552  121970 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-638975"
	I1119 21:47:56.966558  121970 addons.go:239] Setting addon inspektor-gadget=true in "addons-638975"
	I1119 21:47:56.966570  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.966573  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.966576  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.967181  121970 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-638975"
	I1119 21:47:56.967205  121970 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-638975"
	I1119 21:47:56.967230  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.967254  121970 addons.go:70] Setting gcp-auth=true in profile "addons-638975"
	I1119 21:47:56.967281  121970 mustload.go:66] Loading cluster: addons-638975
	I1119 21:47:56.967301  121970 addons.go:70] Setting volcano=true in profile "addons-638975"
	I1119 21:47:56.967319  121970 addons.go:239] Setting addon volcano=true in "addons-638975"
	I1119 21:47:56.967346  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.967461  121970 config.go:182] Loaded profile config "addons-638975": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:56.967513  121970 addons.go:70] Setting registry=true in profile "addons-638975"
	I1119 21:47:56.967532  121970 addons.go:239] Setting addon registry=true in "addons-638975"
	I1119 21:47:56.967561  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.966534  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.967746  121970 addons.go:70] Setting ingress=true in profile "addons-638975"
	I1119 21:47:56.967762  121970 addons.go:239] Setting addon ingress=true in "addons-638975"
	I1119 21:47:56.967790  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.966500  121970 addons.go:70] Setting storage-provisioner=true in profile "addons-638975"
	I1119 21:47:56.968213  121970 addons.go:239] Setting addon storage-provisioner=true in "addons-638975"
	I1119 21:47:56.968271  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.968339  121970 out.go:179] * Verifying Kubernetes components...
	I1119 21:47:56.968430  121970 addons.go:70] Setting cloud-spanner=true in profile "addons-638975"
	I1119 21:47:56.968447  121970 addons.go:239] Setting addon cloud-spanner=true in "addons-638975"
	I1119 21:47:56.968470  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.968488  121970 addons.go:70] Setting volumesnapshots=true in profile "addons-638975"
	I1119 21:47:56.968507  121970 addons.go:239] Setting addon volumesnapshots=true in "addons-638975"
	I1119 21:47:56.968529  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.968944  121970 addons.go:70] Setting default-storageclass=true in profile "addons-638975"
	I1119 21:47:56.969052  121970 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-638975"
	I1119 21:47:56.968359  121970 addons.go:70] Setting metrics-server=true in profile "addons-638975"
	I1119 21:47:56.969184  121970 addons.go:239] Setting addon metrics-server=true in "addons-638975"
	I1119 21:47:56.969206  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.969727  121970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:56.974672  121970 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 21:47:56.974705  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 21:47:56.974746  121970 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 21:47:56.974764  121970 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	W1119 21:47:56.975750  121970 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 21:47:56.975435  121970 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-638975"
	I1119 21:47:56.976026  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.975865  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.976327  121970 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:47:56.976709  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 21:47:56.977029  121970 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 21:47:56.977250  121970 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:47:56.977597  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 21:47:56.977262  121970 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 21:47:56.977959  121970 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 21:47:56.977975  121970 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 21:47:56.977959  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 21:47:56.978651  121970 addons.go:239] Setting addon default-storageclass=true in "addons-638975"
	I1119 21:47:56.978932  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:47:56.979609  121970 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:56.979632  121970 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 21:47:56.979612  121970 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 21:47:56.979645  121970 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 21:47:56.979660  121970 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:47:56.980929  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 21:47:56.980457  121970 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 21:47:56.980463  121970 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 21:47:56.980510  121970 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:47:56.981065  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 21:47:56.980538  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 21:47:56.981298  121970 out.go:179]   - Using image docker.io/busybox:stable
	I1119 21:47:56.981358  121970 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 21:47:56.981680  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 21:47:56.981366  121970 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:47:56.981778  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 21:47:56.982014  121970 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 21:47:56.982029  121970 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 21:47:56.982655  121970 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:56.982679  121970 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 21:47:56.982969  121970 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 21:47:56.982688  121970 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:47:56.983064  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 21:47:56.982704  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 21:47:56.983133  121970 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 21:47:56.983264  121970 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 21:47:56.983280  121970 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 21:47:56.984044  121970 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 21:47:56.984330  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.984867  121970 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 21:47:56.984917  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 21:47:56.985146  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.985417  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.985450  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.985576  121970 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 21:47:56.985621  121970 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:47:56.985628  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 21:47:56.985632  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 21:47:56.985818  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.986858  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.987047  121970 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:47:56.987067  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 21:47:56.988004  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.988113  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1119 21:47:56.988156  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.988110  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.988363  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.989190  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.989285  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.990247  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 21:47:56.991502  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 21:47:56.992502  121970 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 21:47:56.993430  121970 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 21:47:56.993460  121970 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 21:47:56.993498  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.993672  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.994427  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.994893  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.994956  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.994970  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.994994  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.995358  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.995390  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.995693  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.995897  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.995949  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.996006  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.996049  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.996095  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.996142  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.996375  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.996401  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.996406  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.996430  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.996604  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.997136  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.997210  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.997243  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.997273  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.997305  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.997325  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.997139  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.997729  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.997767  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.997800  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.997849  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.998149  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.998275  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.998623  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.998655  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.998966  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.998958  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.999172  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.999200  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.999387  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:56.999591  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:56.999624  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:56.999821  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:47:57.000560  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:57.001062  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:47:57.001098  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:47:57.001280  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	W1119 21:47:57.315046  121970 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58914->192.168.39.215:22: read: connection reset by peer
	I1119 21:47:57.315084  121970 retry.go:31] will retry after 269.380121ms: ssh: handshake failed: read tcp 192.168.39.1:58914->192.168.39.215:22: read: connection reset by peer
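The handshake failure above is a transient connection reset while many SSH sessions are dialed in parallel, and retry.go simply re-dials after ~270ms. For a manual probe, the key path, user and IP are all present in the log; the ssh options below are an assumption about a reasonable non-interactive invocation, not what sshutil itself does:

    # One-off reachability check against the node using the credentials recorded above
    ssh -i /home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa \
        -o StrictHostKeyChecking=no -o ConnectTimeout=5 docker@192.168.39.215 true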
	I1119 21:47:57.666505  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 21:47:58.040401  121970 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 21:47:58.040435  121970 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 21:47:58.063990  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:47:58.064171  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:47:58.094526  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:47:58.148011  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:47:58.151890  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:47:58.165527  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:47:58.168113  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 21:47:58.216492  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:47:58.244405  121970 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 21:47:58.244438  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 21:47:58.265406  121970 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 21:47:58.265447  121970 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 21:47:58.319319  121970 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 21:47:58.319354  121970 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 21:47:58.334621  121970 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 21:47:58.334655  121970 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 21:47:58.341464  121970 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.3717064s)
	I1119 21:47:58.341553  121970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:47:58.341464  121970 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.375209347s)
	I1119 21:47:58.341826  121970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 21:47:58.372470  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:47:58.836070  121970 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 21:47:58.836100  121970 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 21:47:58.972113  121970 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 21:47:58.972143  121970 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 21:47:58.977008  121970 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 21:47:58.977041  121970 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 21:47:59.026006  121970 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:47:59.026047  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 21:47:59.093717  121970 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 21:47:59.093748  121970 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 21:47:59.319867  121970 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:47:59.319908  121970 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 21:47:59.349293  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:47:59.435377  121970 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 21:47:59.435407  121970 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 21:47:59.557856  121970 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 21:47:59.557899  121970 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 21:47:59.659942  121970 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 21:47:59.659977  121970 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 21:47:59.798728  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:47:59.832814  121970 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:47:59.832838  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 21:47:59.944586  121970 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 21:47:59.944616  121970 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 21:48:00.044479  121970 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 21:48:00.044508  121970 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 21:48:00.187221  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:48:00.274783  121970 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:48:00.274810  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 21:48:00.510904  121970 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 21:48:00.510932  121970 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 21:48:00.751981  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:48:01.228530  121970 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 21:48:01.228554  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 21:48:01.736263  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.06970906s)
	I1119 21:48:01.776191  121970 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 21:48:01.776226  121970 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 21:48:02.182155  121970 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 21:48:02.182182  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 21:48:02.723426  121970 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 21:48:02.723456  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 21:48:03.295452  121970 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:48:03.295485  121970 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 21:48:03.775678  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:48:04.407045  121970 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 21:48:04.410070  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:48:04.410525  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:48:04.410552  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:48:04.410727  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:48:04.589180  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.52496828s)
	I1119 21:48:04.589281  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.525247075s)
	I1119 21:48:04.589327  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.494769524s)
	I1119 21:48:04.589398  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.44135827s)
	I1119 21:48:04.589446  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.437526643s)
	I1119 21:48:05.149444  121970 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 21:48:05.163079  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.997506042s)
	I1119 21:48:05.163132  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.994984918s)
	I1119 21:48:05.163163  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.946636917s)
	I1119 21:48:05.163217  121970 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.821639655s)
	I1119 21:48:05.163309  121970 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.821455186s)
	I1119 21:48:05.163333  121970 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
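The sed pipeline completed just above injects a hosts block into the coredns Corefile so that host.minikube.internal resolves to 192.168.39.1 inside the cluster. A hedged way to confirm the injected stanza after the fact (stock kubectl, names taken from the log):

    # Print the patched Corefile and show the injected hosts block
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'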
	I1119 21:48:05.163958  121970 node_ready.go:35] waiting up to 6m0s for node "addons-638975" to be "Ready" ...
	I1119 21:48:05.243752  121970 node_ready.go:49] node "addons-638975" is "Ready"
	I1119 21:48:05.243787  121970 node_ready.go:38] duration metric: took 79.803363ms for node "addons-638975" to be "Ready" ...
	I1119 21:48:05.243803  121970 api_server.go:52] waiting for apiserver process to appear ...
	I1119 21:48:05.243858  121970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1119 21:48:05.296989  121970 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
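The default-storageclass warning above is an optimistic-concurrency conflict: the addon tries to clear the default flag on the local-path class (created moments earlier by storage-provisioner-rancher) so that "standard" can become the default, and the object changed underneath it. A sketch of the underlying intent with plain kubectl, using the standard is-default-class annotation rather than minikube's internal code path, would be:

    # Demote local-path and promote standard as the default StorageClass
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'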
	I1119 21:48:05.462240  121970 addons.go:239] Setting addon gcp-auth=true in "addons-638975"
	I1119 21:48:05.462309  121970 host.go:66] Checking if "addons-638975" exists ...
	I1119 21:48:05.464446  121970 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 21:48:05.467006  121970 main.go:143] libmachine: domain addons-638975 has defined MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:48:05.467449  121970 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:7b:01", ip: ""} in network mk-addons-638975: {Iface:virbr1 ExpiryTime:2025-11-19 22:47:29 +0000 UTC Type:0 Mac:52:54:00:21:7b:01 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-638975 Clientid:01:52:54:00:21:7b:01}
	I1119 21:48:05.467480  121970 main.go:143] libmachine: domain addons-638975 has defined IP address 192.168.39.215 and MAC address 52:54:00:21:7b:01 in network mk-addons-638975
	I1119 21:48:05.467677  121970 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/addons-638975/id_rsa Username:docker}
	I1119 21:48:05.679719  121970 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-638975" context rescaled to 1 replicas
	I1119 21:48:07.096482  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.723958556s)
	I1119 21:48:07.096540  121970 addons.go:480] Verifying addon ingress=true in "addons-638975"
	I1119 21:48:07.096576  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.747239788s)
	I1119 21:48:07.096607  121970 addons.go:480] Verifying addon registry=true in "addons-638975"
	I1119 21:48:07.096719  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.909468809s)
	I1119 21:48:07.096676  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.297914227s)
	I1119 21:48:07.096909  121970 addons.go:480] Verifying addon metrics-server=true in "addons-638975"
	I1119 21:48:07.098516  121970 out.go:179] * Verifying ingress addon...
	I1119 21:48:07.098550  121970 out.go:179] * Verifying registry addon...
	I1119 21:48:07.098516  121970 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-638975 service yakd-dashboard -n yakd-dashboard
	
	I1119 21:48:07.100663  121970 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 21:48:07.100788  121970 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 21:48:07.167679  121970 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:48:07.167709  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:07.167841  121970 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 21:48:07.167865  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:07.249968  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.49793175s)
	W1119 21:48:07.250035  121970 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:48:07.250064  121970 retry.go:31] will retry after 191.678819ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:48:07.441996  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
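(The failure just above is the usual race between creating CRDs and the custom resources that depend on them: the VolumeSnapshotClass is rejected with "ensure CRDs are installed first", and the addon code simply retries the apply. A minimal sketch of that retry pattern, not minikube's actual code, assuming kubectl is on PATH; the manifest path is taken from the log, while the attempt count, delay, and function name are illustrative:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applyWithRetry re-runs `kubectl apply` while the error indicates that a
// required CRD has not been registered yet ("no matches for kind ...").
func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		// Only retry the CRD race; any other error is returned immediately.
		if !strings.Contains(string(out), "no matches for kind") &&
			!strings.Contains(string(out), "ensure CRDs are installed first") {
			return lastErr
		}
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 2*time.Second); err != nil {
		fmt.Println(err)
	}
}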
	I1119 21:48:07.612596  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:07.612683  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:08.114018  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:08.116660  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:08.648826  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:08.652405  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:08.731246  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.955516569s)
	I1119 21:48:08.731298  121970 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-638975"
	I1119 21:48:08.731330  121970 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.487446187s)
	I1119 21:48:08.731366  121970 api_server.go:72] duration metric: took 11.765058282s to wait for apiserver process to appear ...
	I1119 21:48:08.731378  121970 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.266904431s)
	I1119 21:48:08.731381  121970 api_server.go:88] waiting for apiserver healthz status ...
	I1119 21:48:08.731653  121970 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1119 21:48:08.733046  121970 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:48:08.733051  121970 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 21:48:08.734902  121970 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 21:48:08.735713  121970 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 21:48:08.736292  121970 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 21:48:08.736314  121970 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 21:48:08.766203  121970 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I1119 21:48:08.768144  121970 api_server.go:141] control plane version: v1.34.1
	I1119 21:48:08.768183  121970 api_server.go:131] duration metric: took 36.558022ms to wait for apiserver health ...
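(The healthz wait above is a plain HTTPS GET against the apiserver until it answers 200 with body "ok". A rough, self-contained equivalent; the endpoint address is taken from the log, while the timeouts are illustrative and certificate verification is skipped here as an assumption, whereas a real client would trust the cluster CA:)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 with body "ok", or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: TLS verification is disabled for brevity only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.215:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}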
	I1119 21:48:08.768196  121970 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 21:48:08.823365  121970 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:48:08.823387  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:08.849535  121970 system_pods.go:59] 20 kube-system pods found
	I1119 21:48:08.849578  121970 system_pods.go:61] "amd-gpu-device-plugin-tdz7q" [948249da-73f9-41f3-9a33-b17975b2bd66] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:08.849589  121970 system_pods.go:61] "coredns-66bc5c9577-8kzpl" [7d6e38c3-4e4b-40a9-807c-ef109a7d8fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:08.849599  121970 system_pods.go:61] "coredns-66bc5c9577-jb6g9" [f21c2a09-59e1-46cc-948e-753fc789c4f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:08.849607  121970 system_pods.go:61] "csi-hostpath-attacher-0" [7404d6f1-83d6-40c3-9e44-7ad8aaa77a92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:08.849613  121970 system_pods.go:61] "csi-hostpath-resizer-0" [410196f6-f9a4-4ec3-ac90-857d164afaf9] Pending
	I1119 21:48:08.849621  121970 system_pods.go:61] "csi-hostpathplugin-4mqnv" [d0b8adf8-4a4f-4c40-ba74-6cfec44145a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:08.849626  121970 system_pods.go:61] "etcd-addons-638975" [4c84fa30-c509-432b-ab50-c019d2f47bb0] Running
	I1119 21:48:08.849632  121970 system_pods.go:61] "kube-apiserver-addons-638975" [36fe444d-d590-4f03-9e57-b13f7b6a6fa2] Running
	I1119 21:48:08.849637  121970 system_pods.go:61] "kube-controller-manager-addons-638975" [f47608cf-b70e-4d4c-88ec-aef0b85019f8] Running
	I1119 21:48:08.849645  121970 system_pods.go:61] "kube-ingress-dns-minikube" [2a849854-a281-483d-b625-f7cdca4b399d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:08.849650  121970 system_pods.go:61] "kube-proxy-nx8qh" [c98e5734-a3cf-4fca-87a4-a5e427ad8692] Running
	I1119 21:48:08.849656  121970 system_pods.go:61] "kube-scheduler-addons-638975" [943a2e01-8118-4ba7-a934-2e9c098f06d9] Running
	I1119 21:48:08.849664  121970 system_pods.go:61] "metrics-server-85b7d694d7-xltwn" [3373e30d-0efb-4d43-a703-3d25e2b814da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:08.849677  121970 system_pods.go:61] "nvidia-device-plugin-daemonset-p5fgq" [dc9daaaa-4317-43a2-b831-d637962e0d5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:08.849687  121970 system_pods.go:61] "registry-6b586f9694-s4jwv" [a2066b20-dd37-4423-97f0-1146b27baf9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:08.849705  121970 system_pods.go:61] "registry-creds-764b6fb674-kb2wn" [60c12ae5-6dba-495b-8b8e-3c710e63e5c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:08.849714  121970 system_pods.go:61] "registry-proxy-xjlbv" [d9c8a85c-fb02-41d8-b56a-31dd13e0aca3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:08.849722  121970 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wbmw2" [35d0c6ba-797b-455c-af58-eeb8566b522d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:08.849731  121970 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xvrkj" [b78005f0-636a-45af-b657-37b4ee024c63] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:08.849742  121970 system_pods.go:61] "storage-provisioner" [f28f1868-60a3-4c85-8016-170819751f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:08.849755  121970 system_pods.go:74] duration metric: took 81.550554ms to wait for pod list to return data ...
	I1119 21:48:08.849772  121970 default_sa.go:34] waiting for default service account to be created ...
	I1119 21:48:08.865171  121970 default_sa.go:45] found service account: "default"
	I1119 21:48:08.865198  121970 default_sa.go:55] duration metric: took 15.418452ms for default service account to be created ...
	I1119 21:48:08.865210  121970 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 21:48:08.872420  121970 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 21:48:08.872448  121970 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 21:48:08.897767  121970 system_pods.go:86] 20 kube-system pods found
	I1119 21:48:08.897814  121970 system_pods.go:89] "amd-gpu-device-plugin-tdz7q" [948249da-73f9-41f3-9a33-b17975b2bd66] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:08.897828  121970 system_pods.go:89] "coredns-66bc5c9577-8kzpl" [7d6e38c3-4e4b-40a9-807c-ef109a7d8fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:08.897841  121970 system_pods.go:89] "coredns-66bc5c9577-jb6g9" [f21c2a09-59e1-46cc-948e-753fc789c4f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:08.897855  121970 system_pods.go:89] "csi-hostpath-attacher-0" [7404d6f1-83d6-40c3-9e44-7ad8aaa77a92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:08.897867  121970 system_pods.go:89] "csi-hostpath-resizer-0" [410196f6-f9a4-4ec3-ac90-857d164afaf9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:08.897900  121970 system_pods.go:89] "csi-hostpathplugin-4mqnv" [d0b8adf8-4a4f-4c40-ba74-6cfec44145a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:08.897911  121970 system_pods.go:89] "etcd-addons-638975" [4c84fa30-c509-432b-ab50-c019d2f47bb0] Running
	I1119 21:48:08.897918  121970 system_pods.go:89] "kube-apiserver-addons-638975" [36fe444d-d590-4f03-9e57-b13f7b6a6fa2] Running
	I1119 21:48:08.897924  121970 system_pods.go:89] "kube-controller-manager-addons-638975" [f47608cf-b70e-4d4c-88ec-aef0b85019f8] Running
	I1119 21:48:08.897936  121970 system_pods.go:89] "kube-ingress-dns-minikube" [2a849854-a281-483d-b625-f7cdca4b399d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:08.897943  121970 system_pods.go:89] "kube-proxy-nx8qh" [c98e5734-a3cf-4fca-87a4-a5e427ad8692] Running
	I1119 21:48:08.897951  121970 system_pods.go:89] "kube-scheduler-addons-638975" [943a2e01-8118-4ba7-a934-2e9c098f06d9] Running
	I1119 21:48:08.897963  121970 system_pods.go:89] "metrics-server-85b7d694d7-xltwn" [3373e30d-0efb-4d43-a703-3d25e2b814da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:08.897974  121970 system_pods.go:89] "nvidia-device-plugin-daemonset-p5fgq" [dc9daaaa-4317-43a2-b831-d637962e0d5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:08.897984  121970 system_pods.go:89] "registry-6b586f9694-s4jwv" [a2066b20-dd37-4423-97f0-1146b27baf9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:08.897997  121970 system_pods.go:89] "registry-creds-764b6fb674-kb2wn" [60c12ae5-6dba-495b-8b8e-3c710e63e5c1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:08.898006  121970 system_pods.go:89] "registry-proxy-xjlbv" [d9c8a85c-fb02-41d8-b56a-31dd13e0aca3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:08.898017  121970 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wbmw2" [35d0c6ba-797b-455c-af58-eeb8566b522d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:08.898026  121970 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xvrkj" [b78005f0-636a-45af-b657-37b4ee024c63] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:08.898040  121970 system_pods.go:89] "storage-provisioner" [f28f1868-60a3-4c85-8016-170819751f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:08.898052  121970 system_pods.go:126] duration metric: took 32.834133ms to wait for k8s-apps to be running ...
	I1119 21:48:08.898069  121970 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 21:48:08.898134  121970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:48:09.047299  121970 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:48:09.047325  121970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 21:48:09.109729  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:09.110842  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:09.139603  121970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:48:09.243789  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:09.608618  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:09.608706  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:09.746547  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:10.094195  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.652131064s)
	I1119 21:48:10.094223  121970 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.196062291s)
	I1119 21:48:10.094248  121970 system_svc.go:56] duration metric: took 1.196176306s WaitForService to wait for kubelet
	I1119 21:48:10.094261  121970 kubeadm.go:587] duration metric: took 13.127952026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:48:10.094294  121970 node_conditions.go:102] verifying NodePressure condition ...
	I1119 21:48:10.101452  121970 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 21:48:10.101485  121970 node_conditions.go:123] node cpu capacity is 2
	I1119 21:48:10.101500  121970 node_conditions.go:105] duration metric: took 7.200094ms to run NodePressure ...
	I1119 21:48:10.101514  121970 start.go:242] waiting for startup goroutines ...
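(The NodePressure step above reads the node's reported capacity, the 17734596Ki of ephemeral storage and 2 CPUs in the log. A small client-go sketch of the same lookup, assuming a kubeconfig at the default ~/.kube/config location:)

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at ~/.kube/config.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the per-node CPU and ephemeral-storage capacity.
	for _, node := range nodes.Items {
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
	}
}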
	I1119 21:48:10.111625  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:10.111744  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:10.248894  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:10.656016  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:10.681530  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:10.703689  121970 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.564042375s)
	I1119 21:48:10.704907  121970 addons.go:480] Verifying addon gcp-auth=true in "addons-638975"
	I1119 21:48:10.706986  121970 out.go:179] * Verifying gcp-auth addon...
	I1119 21:48:10.708806  121970 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 21:48:10.759102  121970 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 21:48:10.759131  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:10.776021  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:11.111475  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:11.112291  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:11.223194  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:11.321851  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:11.608865  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:11.608897  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:11.714912  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:11.742744  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:12.106286  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:12.106471  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:12.212252  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:12.242839  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:12.608859  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:12.610283  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:12.714296  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:12.741864  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:13.107432  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:13.108802  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:13.217277  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:13.239908  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:13.607197  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:13.607288  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:13.713433  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:13.741373  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:14.107083  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:14.107364  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:14.214113  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:14.240317  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:14.610060  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:14.610208  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:14.714110  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:14.742227  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:15.126590  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:15.126697  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:15.213117  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:15.240767  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:15.605539  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:15.606918  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:15.713375  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:15.741068  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:16.105311  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:16.105522  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:16.213163  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:16.240289  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:16.606002  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:16.609108  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:16.713141  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:16.739945  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:17.113759  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:17.113988  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:17.213600  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:17.242531  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:17.607921  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:17.608012  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:17.714419  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:17.740467  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:18.109445  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:18.112082  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:18.215198  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:18.241174  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:18.606742  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:18.608092  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:18.712762  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:18.750261  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:19.206448  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:19.208751  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:19.213244  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:19.243271  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:19.607381  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:19.612944  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:19.713139  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:19.742239  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:20.105597  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:20.105847  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:20.212749  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:20.241138  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:20.607471  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:20.607627  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:20.716263  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:20.815028  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:21.105747  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:21.105959  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:21.213788  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:21.240679  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:21.611935  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:21.612472  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:21.715502  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:21.743398  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:22.107257  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:22.110152  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:22.216102  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:22.244616  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:22.607169  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:22.609022  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:22.712639  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:22.744154  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:23.108408  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:23.108458  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:23.215420  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:23.317524  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:23.608787  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:23.611897  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:23.714593  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:23.740215  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:24.105218  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:24.107293  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:24.213694  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:24.242439  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:24.609618  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:24.613652  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:24.713313  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:24.747734  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:25.105366  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:25.106613  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:25.213519  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:25.242046  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:25.630609  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:25.631443  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:25.713532  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:25.740604  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:26.146620  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:26.149203  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:26.236846  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:26.249525  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:26.607638  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:26.608061  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:26.712408  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:26.741863  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:27.107591  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:27.107920  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:27.213596  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:27.241587  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:27.608179  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:27.611512  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:27.979769  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:27.981941  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:28.108507  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:28.111311  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:28.215172  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:28.241350  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:28.607251  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:28.608039  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:28.713296  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:28.742398  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:29.104707  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:29.105643  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:29.213296  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:29.241227  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:29.607498  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:29.607618  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:29.715780  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:29.743476  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:30.107514  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:30.107717  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:30.216618  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:30.242623  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:30.609328  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:30.611414  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:30.717593  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:30.746354  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:31.114096  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:31.114639  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:31.214699  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:31.242757  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:31.606329  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:31.606410  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:31.712407  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:31.739950  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:32.105688  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:32.106177  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:32.213484  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:32.251656  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:32.607598  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:32.607871  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:32.712433  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:32.740006  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:33.107651  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:33.107816  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:33.215205  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:33.242459  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:33.607620  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:33.607788  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:33.713367  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:33.741174  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:34.109490  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:34.109642  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:34.212680  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:34.241421  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:34.604805  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:34.606740  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:34.714469  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:34.740564  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:35.105102  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:35.105403  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:35.213112  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:35.239889  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:35.605974  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:35.606032  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:35.711961  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:35.739775  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:36.115665  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:36.117024  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:36.213525  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:36.244568  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:36.607489  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:36.608225  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:36.712672  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:36.744305  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:37.112184  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:37.112391  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:37.219934  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:37.241382  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:37.605070  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:37.605212  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:37.714337  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:37.742313  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:38.256392  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:38.259796  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:38.260017  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:38.260057  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:38.605360  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:38.605933  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:38.718987  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:38.748136  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:39.107036  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:39.108193  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:39.213498  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:39.241355  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:39.605832  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:39.605984  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:39.715009  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:39.739773  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:40.107557  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:40.107750  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:40.212533  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:40.243041  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:40.607006  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:40.608781  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:40.712996  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:40.740401  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:41.109839  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:41.112285  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:41.212293  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:41.242492  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:41.608500  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:41.616238  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:41.713929  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:41.741971  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:42.104492  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:42.104854  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:42.212050  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:42.239600  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:42.606840  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:42.606936  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:42.712988  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:42.739673  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:43.105695  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:43.105755  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:43.213979  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:43.239700  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:43.607398  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:43.607566  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:43.715604  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:43.745415  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:44.107190  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:44.107800  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:44.213989  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:44.240666  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:44.607085  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:44.607230  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:44.714659  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:44.743272  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:45.116396  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:45.118254  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:45.217712  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:45.244370  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:45.608121  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:45.611003  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:45.716051  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:45.741380  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:46.111229  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:46.111230  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:46.213116  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:46.243737  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:46.608844  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:46.609504  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:46.713468  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:46.741821  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:47.128980  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:47.155479  121970 kapi.go:107] duration metric: took 40.054811647s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 21:48:47.242468  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:47.243114  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:47.607153  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:47.712431  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:47.741074  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:48.105773  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:48.214895  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:48.238835  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:48.609357  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:48.714554  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:48.744525  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:49.106990  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:49.215842  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:49.243299  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:49.607786  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:49.713047  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:49.740071  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:50.106509  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:50.213942  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:50.245198  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:50.612355  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:50.720989  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:50.740145  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:51.110149  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:51.216118  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:51.248413  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:51.604849  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:51.715915  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:51.741036  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:52.105994  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.212528  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:52.242578  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:52.610025  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.714546  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:52.739762  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:53.105277  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:53.212848  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:53.242676  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:53.605472  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:53.715572  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:53.746675  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:54.107667  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:54.212691  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:54.241904  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:54.607013  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:54.856834  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:54.858027  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:55.109155  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.215669  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:55.244352  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:55.608692  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.716249  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:55.744384  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:56.107517  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:56.217201  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:56.245070  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:56.605676  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:56.713671  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:56.744373  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.105629  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.213256  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:57.241050  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.605174  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.719148  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:57.743658  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:58.109127  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:58.212065  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:58.239344  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:58.604974  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:58.724820  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:58.742514  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:59.116379  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:59.213061  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:59.242744  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:59.604753  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:59.713870  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:59.740496  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:00.109511  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:00.216098  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:00.241630  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:00.611334  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:00.713735  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:00.744637  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:01.107447  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:01.216285  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:01.240641  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:01.605600  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:01.713492  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:01.742672  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:02.106116  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:02.213153  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:02.245659  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:02.640671  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:02.744117  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:02.745353  121970 kapi.go:107] duration metric: took 54.009645252s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 21:49:03.106398  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:03.212375  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:03.604405  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:03.712789  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:04.104871  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:04.211899  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:04.604720  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:04.714924  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:05.105463  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:05.214229  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:05.605983  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:05.712073  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:06.108448  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:06.213349  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:06.606294  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:06.713666  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:07.105169  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:07.212965  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:07.609813  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:07.716286  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:08.105011  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:08.212426  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:08.605382  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:08.712984  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:09.105196  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:09.213302  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:09.605386  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:09.712646  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:10.104479  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:10.212613  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:10.604749  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:10.712840  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:11.106998  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:11.213532  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:11.606344  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:11.714582  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:12.108800  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:12.214399  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:12.614544  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:12.717736  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:13.105454  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:13.213870  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:13.607346  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:13.713624  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:14.106965  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:14.216107  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:14.606168  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:14.713987  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:15.106195  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:15.411795  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:15.609113  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:15.714411  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:16.113753  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:16.216529  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:16.605040  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:16.712258  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:17.105086  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:17.215012  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:17.609228  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:17.713832  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:18.111306  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:18.212412  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:18.607506  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:18.714449  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:19.105659  121970 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:19.214298  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:19.605723  121970 kapi.go:107] duration metric: took 1m12.504932754s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 21:49:19.712856  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:20.220475  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:20.714044  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:21.215634  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:21.713601  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:22.216946  121970 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:22.713851  121970 kapi.go:107] duration metric: took 1m12.005042488s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 21:49:22.715699  121970 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-638975 cluster.
	I1119 21:49:22.716903  121970 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 21:49:22.718049  121970 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 21:49:22.719215  121970 out.go:179] * Enabled addons: cloud-spanner, inspektor-gadget, amd-gpu-device-plugin, ingress-dns, storage-provisioner, registry-creds, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1119 21:49:22.720537  121970 addons.go:515] duration metric: took 1m25.754225856s for enable addons: enabled=[cloud-spanner inspektor-gadget amd-gpu-device-plugin ingress-dns storage-provisioner registry-creds nvidia-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1119 21:49:22.720591  121970 start.go:247] waiting for cluster config update ...
	I1119 21:49:22.720621  121970 start.go:256] writing updated cluster config ...
	I1119 21:49:22.720959  121970 ssh_runner.go:195] Run: rm -f paused
	I1119 21:49:22.728329  121970 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:49:22.732436  121970 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8kzpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:22.738479  121970 pod_ready.go:94] pod "coredns-66bc5c9577-8kzpl" is "Ready"
	I1119 21:49:22.738501  121970 pod_ready.go:86] duration metric: took 6.042976ms for pod "coredns-66bc5c9577-8kzpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:22.741243  121970 pod_ready.go:83] waiting for pod "etcd-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:22.746727  121970 pod_ready.go:94] pod "etcd-addons-638975" is "Ready"
	I1119 21:49:22.746755  121970 pod_ready.go:86] duration metric: took 5.48843ms for pod "etcd-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:22.749029  121970 pod_ready.go:83] waiting for pod "kube-apiserver-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:22.754332  121970 pod_ready.go:94] pod "kube-apiserver-addons-638975" is "Ready"
	I1119 21:49:22.754355  121970 pod_ready.go:86] duration metric: took 5.307013ms for pod "kube-apiserver-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:22.757226  121970 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:23.133466  121970 pod_ready.go:94] pod "kube-controller-manager-addons-638975" is "Ready"
	I1119 21:49:23.133493  121970 pod_ready.go:86] duration metric: took 376.245727ms for pod "kube-controller-manager-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:23.333530  121970 pod_ready.go:83] waiting for pod "kube-proxy-nx8qh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:23.732983  121970 pod_ready.go:94] pod "kube-proxy-nx8qh" is "Ready"
	I1119 21:49:23.733020  121970 pod_ready.go:86] duration metric: took 399.463735ms for pod "kube-proxy-nx8qh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:23.933169  121970 pod_ready.go:83] waiting for pod "kube-scheduler-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:24.334400  121970 pod_ready.go:94] pod "kube-scheduler-addons-638975" is "Ready"
	I1119 21:49:24.334434  121970 pod_ready.go:86] duration metric: took 401.226233ms for pod "kube-scheduler-addons-638975" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:24.334453  121970 pod_ready.go:40] duration metric: took 1.606077342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:49:24.382285  121970 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 21:49:24.385100  121970 out.go:179] * Done! kubectl is now configured to use "addons-638975" cluster and "default" namespace by default
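The gcp-auth messages logged above at 21:49:22 name the `gcp-auth-skip-secret` label key but do not show where it goes. Below is a minimal sketch of a pod that opts out of credential mounting, applied against the same cluster context; the pod name and the label value "true" are assumptions (the log only specifies the key), and the busybox image is reused from the workloads listed later in this report.

kubectl --context addons-638975 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds-demo
  labels:
    gcp-auth-skip-secret: "true"   # key taken from the gcp-auth message above; the value is an assumption
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF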
	
	
	==> CRI-O <==
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.923546399Z" level=debug msg="Setting container's log_path = /var/log/pods/default_hello-world-app-5d498dc89-pm4hf_607558a0-5f49-49fc-a9df-452ab38f3178, sbox.logdir = hello-world-app/0.log, ctr.logfile = /var/log/pods/default_hello-world-app-5d498dc89-pm4hf_607558a0-5f49-49fc-a9df-452ab38f3178/hello-world-app/0.log" file="container/container.go:453"
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.924300382Z" level=debug msg="CONTAINER USER: 0" file="server/container_create.go:223" id=afbcbd80-83d9-4018-be91-55cd81d64764 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.924800643Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/24c5ce86970b9408e01e84f0142256f4c45a8cab7b4bb2f5fa0fa4eb1f0898c2/merged/etc/passwd: no such file or directory" file="utils/utils.go:170"
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.925534325Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/24c5ce86970b9408e01e84f0142256f4c45a8cab7b4bb2f5fa0fa4eb1f0898c2/merged/etc/group: no such file or directory" file="utils/utils.go:177"
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.925675323Z" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" file="subscriptions/subscriptions.go:207"
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.926294291Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-5d498dc89-pm4hf_default_607558a0-5f49-49fc-a9df-452ab38f3178_0 from container spec configuration to container runtime creation" file="resourcestore/resourcestore.go:227" id=afbcbd80-83d9-4018-be91-55cd81d64764 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.926365565Z" level=debug msg="running conmon: /usr/bin/conmon" args="[-b /var/run/containers/storage/overlay-containers/e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5/userdata -c e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5 --exit-dir /var/run/crio/exits -l /var/log/pods/default_hello-world-app-5d498dc89-pm4hf_607558a0-5f49-49fc-a9df-452ab38f3178/hello-world-app/0.log --log-level debug -n k8s_hello-world-app_hello-world-app-5d498dc89-pm4hf_default_607558a0-5f49-49fc-a9df-452ab38f3178_0 -P /var/run/containers/storage/overlay-containers/e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5/userdata/conmon-pidfile -p /var/run/containers/storage/overlay-containers/e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5/userdata/pidfile --persist-dir /var/lib/containers/storage/overlay-containers/e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5/userdata -r /usr/bin/ru
nc --runtime-arg --root=/run/runc --socket-dir-path /var/run/crio --syslog -u e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5]" file="oci/runtime_oci.go:168" id=afbcbd80-83d9-4018-be91-55cd81d64764 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:26 addons-638975 conmon[12594]: conmon e0c109654999ffe8c854 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
	Nov 19 21:52:26 addons-638975 conmon[12594]: conmon e0c109654999ffe8c854 <ndebug>: terminal_ctrl_fd: 12
	Nov 19 21:52:26 addons-638975 conmon[12594]: conmon e0c109654999ffe8c854 <ndebug>: winsz read side: 16, winsz write side: 17
	Nov 19 21:52:26 addons-638975 conmon[12594]: conmon e0c109654999ffe8c854 <ndebug>: container PID: 12610
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.977854474Z" level=debug msg="Received container pid: 12610" file="oci/runtime_oci.go:284" id=afbcbd80-83d9-4018-be91-55cd81d64764 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.988321246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=343ef88d-30bc-4cb5-aa6d-a4abd0dab1e0 name=/runtime.v1.RuntimeService/Version
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.988678779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=343ef88d-30bc-4cb5-aa6d-a4abd0dab1e0 name=/runtime.v1.RuntimeService/Version
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.991916006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=187d744b-df2d-41ac-81b2-91f4562d4192 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.993862264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763589146993747750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597202,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=187d744b-df2d-41ac-81b2-91f4562d4192 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.995215606Z" level=info msg="Created container e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5: default/hello-world-app-5d498dc89-pm4hf/hello-world-app" file="server/container_create.go:491" id=afbcbd80-83d9-4018-be91-55cd81d64764 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.995329376Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5,}" file="otel-collector/interceptors.go:74" id=afbcbd80-83d9-4018-be91-55cd81d64764 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.995659707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3e2276d-16ea-4923-a1cf-faf2b3d50055 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.995723047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3e2276d-16ea-4923-a1cf-faf2b3d50055 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.996143133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5,PodSandboxId:b3104f8c785376fda032dfe65afbdee09041cd396bb74ff9dd8420676c7ab1a1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_CREATED,CreatedAt:1763589146922817723,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-pm4hf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 607558a0-5f49-49fc-a9df-452ab38f3178,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8371375202c668a0c02a0d551939339978603e9e5ecb48533a622bac03803df8,PodSandboxId:61c815bd8c3208cd8955bfb1adeb5fdf51af6e96c71a923dbe7440e0c9870955,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763589003563801411,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2fac4c62-e573-4306-8fca-879beef43c13,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d489e6f165a577d5a1c02f789275009c2a52d9523c94688447209d05cf13c07,PodSandboxId:4e1174c032534094120dddd1372db47784fb96e6d83201310b2767a50ee21427,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763588968000246007,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 524d4592-b3ee-4ebd-a2
bb-c0e0834a1ed7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06c29341c6b002ca07f5878266e3e8cec8b3be1b995a8c56518aae389a3cf1a7,PodSandboxId:bb7bc046135b67801204c8e0dd992d2e3ae5d2e130c47702bfe4ad4896e247aa,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763588958651241740,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-rzmcn,io.kubernetes.pod.namespace: ingress-nginx,io.
kubernetes.pod.uid: 4607941d-ef50-41ac-928a-ffa37c7fcd0d,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:eadf20b943f0624c4859f306a13afdee3b3b0efaaf230b454172bb37489cfb41,PodSandboxId:8a873ee3d65f0f12a835ce11f506854a8f0c5f812beab958b2e9614ffa674c69,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763588946111611757,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x46tz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5648c297-1599-440c-9044-8c6e79da0f5b,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e2aea509860b8d2f68564b8ed9bee53cc0bc27ac93550c20915bbcc9d4f58f,PodSandboxId:42e5833f4347cc70d0ac93eeb18f6f3dd1c73eaea4dbbe5d761b855bde9c6839,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763588930352828835,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-647cq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f7e48804-cdeb-4dd8-ad85-aac232cbf5fc,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e4c4c87ea700226266f2fede4a49e0305db2ec19b4c7b1d31e6a3dc05a43a4,PodSandboxId:cc1f84290fff49cd96086e3c680747d48b12aab5971c42342a8251ba49bd5c95,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763588909270983750,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a849854-a281-483d-b625-f7cdca4b399d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f43d5284aa6eb81f823edc06b7ac245a47151e70d547b14ced4d8be29149a9e,PodSandboxId:849e6a185adc817e012eb6272aa9811d04dcb055f18ffbfa7c4646696ec094a7,Metadata:&ContainerMetadata{Name
:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763588885927512476,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-tdz7q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 948249da-73f9-41f3-9a33-b17975b2bd66,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffff7f567396240d424f64bb45cde10f45948f6470abda0bfe80eb132c0555aa,PodSandboxId:75dca618f513eb4f499c93f15f8aba0cf0b1f59d3c89960ffcd25bb2e
8b34abd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763588886868111028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f1868-60a3-4c85-8016-170819751f0b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0d4d921d09244b73a0fda09d47485b20ccdb04377d0c913f413bfe458eed3f0,PodSandboxId:90965dd394974921f0190597a5dbc237ea6f05d69e03e608da5bf21b7abbc4c5,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763588878673622682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8kzpl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e38c3-4e4b-40a9-807c-ef109a7d8fa9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da03beb301f249883263681f514a76660d629d9a2ac1c5f7a617849fa5fa79b2,PodSandboxId:0cc142309a9466223545e694a8b063430eaf93b96c64519851e000dc0d8de32b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763588877836499654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nx8qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c98e5734-a3cf-4fca-87a4-a5e427ad8692,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2a4495ce98248f19f1e49837a7b77c4a395d8454c8746a37f1d8d0ce08df56a,PodSandboxId:313a60b3ae5d657b02969cfe587b64bee082039021a3f3b6e8e0bcd97d070b08,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763588866123992513,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-638975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd72cbeb36a02d4531bf8b74556b1fd,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"con
tainerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b1f66be30c130fa3cf7ccea2507b3b9249bc1abe5cbeb392232b831794bc16,PodSandboxId:0ffbbee74a54473f1f9cd4832b98b01f492ead4d8e8082e8dee6ddb79ed92817,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763588866076351870,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-638975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69f2c036dfa51e9e10e93df7eede1456,},Annotations:map[string]string{io.kubernetes.
container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e93e46e539f9e049bb5a343874bd4d16206b184c7b50b2f12d0849969f58ef,PodSandboxId:a5c48dcd50c2af73a16ad1d85cc1995b7437e43bea4c8aca1f974495d3c9994b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763588866035103050,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-638975,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: c195edc3593133636d44b8155a2981e1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d971002aa762deedd6285a5d5c86fd82cec31a15b577c574cfc672249eea6e71,PodSandboxId:532a7940fcf0d74d9782f0ba87b02ee54b434fe8eaf9def515c1ca0dd85016cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763588866021177024,Labels:map[string]string{io.kubernetes.
container.name: etcd,io.kubernetes.pod.name: etcd-addons-638975,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eacb67bade8b9c41c936f489e620e057,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3e2276d-16ea-4923-a1cf-faf2b3d50055 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.997125823Z" level=debug msg="Request: &StartContainerRequest{ContainerId:e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5,}" file="otel-collector/interceptors.go:62" id=e28c8d1b-227e-47e5-bc4e-88096fa0661f name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:52:26 addons-638975 crio[819]: time="2025-11-19 21:52:26.997343019Z" level=info msg="Starting container: e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5" file="server/container_start.go:21" id=e28c8d1b-227e-47e5-bc4e-88096fa0661f name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:52:27 addons-638975 crio[819]: time="2025-11-19 21:52:27.011661135Z" level=info msg="Started container" PID=12610 containerID=e0c109654999ffe8c854cbbe97b62775a61350443315fde4b0f006e2cf2b6dd5 description=default/hello-world-app-5d498dc89-pm4hf/hello-world-app file="server/container_start.go:115" id=e28c8d1b-227e-47e5-bc4e-88096fa0661f name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3104f8c785376fda032dfe65afbdee09041cd396bb74ff9dd8420676c7ab1a1
	Nov 19 21:52:27 addons-638975 crio[819]: time="2025-11-19 21:52:27.032729895Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=e28c8d1b-227e-47e5-bc4e-88096fa0661f name=/runtime.v1.RuntimeService/StartContainer
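The CRI-O entries above trace the hello-world-app container through CreateContainer and StartContainer on the node. A sketch for pulling the same runtime log directly from the guest, assuming CRI-O runs as the systemd unit named crio (the unit name is an assumption; the --since value matches the timestamps above):

# Follow the CRI-O journal around the container start shown above (sketch, not part of the test run)
out/minikube-linux-amd64 -p addons-638975 ssh "sudo journalctl -u crio --since '2025-11-19 21:52:26' --no-pager"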
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	e0c109654999f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   b3104f8c78537       hello-world-app-5d498dc89-pm4hf
	8371375202c66       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago            Running             nginx                     0                   61c815bd8c320       nginx
	5d489e6f165a5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   4e1174c032534       busybox
	06c29341c6b00       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago            Running             controller                0                   bb7bc046135b6       ingress-nginx-controller-6c8bf45fb-rzmcn
	eadf20b943f06       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             3 minutes ago            Exited              patch                     2                   8a873ee3d65f0       ingress-nginx-admission-patch-x46tz
	c2e2aea509860       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago            Exited              create                    0                   42e5833f4347c       ingress-nginx-admission-create-647cq
	27e4c4c87ea70       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   cc1f84290fff4       kube-ingress-dns-minikube
	ffff7f5673962       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   75dca618f513e       storage-provisioner
	7f43d5284aa6e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   849e6a185adc8       amd-gpu-device-plugin-tdz7q
	e0d4d921d0924       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago            Running             coredns                   0                   90965dd394974       coredns-66bc5c9577-8kzpl
	da03beb301f24       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago            Running             kube-proxy                0                   0cc142309a946       kube-proxy-nx8qh
	b2a4495ce9824       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago            Running             kube-scheduler            0                   313a60b3ae5d6       kube-scheduler-addons-638975
	35b1f66be30c1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago            Running             kube-apiserver            0                   0ffbbee74a544       kube-apiserver-addons-638975
	05e93e46e539f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago            Running             kube-controller-manager   0                   a5c48dcd50c2a       kube-controller-manager-addons-638975
	d971002aa762d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago            Running             etcd                      0                   532a7940fcf0d       etcd-addons-638975
	
	
	==> coredns [e0d4d921d09244b73a0fda09d47485b20ccdb04377d0c913f413bfe458eed3f0] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36880 - 63328 "HINFO IN 7964174294087714544.6290242710474935174. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045521356s
	[INFO] 10.244.0.23:37748 - 64611 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000302006s
	[INFO] 10.244.0.23:52423 - 26031 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000423371s
	[INFO] 10.244.0.23:41808 - 55839 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096835s
	[INFO] 10.244.0.23:49997 - 32005 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201395s
	[INFO] 10.244.0.23:55275 - 7730 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077994s
	[INFO] 10.244.0.23:48380 - 49758 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000218356s
	[INFO] 10.244.0.23:45139 - 22757 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001505405s
	[INFO] 10.244.0.23:40745 - 13048 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00391431s
	[INFO] 10.244.0.27:36092 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001039962s
	[INFO] 10.244.0.27:48895 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000171967s
	
	
	==> describe nodes <==
	Name:               addons-638975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-638975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=addons-638975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T21_47_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-638975
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 21:47:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-638975
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 21:52:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 21:50:25 +0000   Wed, 19 Nov 2025 21:47:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 21:50:25 +0000   Wed, 19 Nov 2025 21:47:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 21:50:25 +0000   Wed, 19 Nov 2025 21:47:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 21:50:25 +0000   Wed, 19 Nov 2025 21:47:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-638975
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2f6eefac91944ed8f0e857a5c1f0052
	  System UUID:                d2f6eefa-c919-44ed-8f0e-857a5c1f0052
	  Boot ID:                    84b918ae-dc58-45d5-ae4d-8f93a20b8004
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-5d498dc89-pm4hf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-rzmcn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m21s
	  kube-system                 amd-gpu-device-plugin-tdz7q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-8kzpl                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m30s
	  kube-system                 etcd-addons-638975                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
	  kube-system                 kube-apiserver-addons-638975                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-addons-638975       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-nx8qh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-638975                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node addons-638975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node addons-638975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x7 over 4m42s)  kubelet          Node addons-638975 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m35s                  kubelet          Node addons-638975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s                  kubelet          Node addons-638975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s                  kubelet          Node addons-638975 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m35s                  kubelet          Node addons-638975 status is now: NodeReady
	  Normal  RegisteredNode           4m32s                  node-controller  Node addons-638975 event: Registered Node addons-638975 in Controller
	
	
	==> dmesg <==
	[Nov19 21:48] kauditd_printk_skb: 312 callbacks suppressed
	[  +0.289761] kauditd_printk_skb: 257 callbacks suppressed
	[  +3.699765] kauditd_printk_skb: 446 callbacks suppressed
	[  +5.202344] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.641735] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.504662] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.541484] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.595084] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.084332] kauditd_printk_skb: 93 callbacks suppressed
	[  +5.067578] kauditd_printk_skb: 100 callbacks suppressed
	[Nov19 21:49] kauditd_printk_skb: 90 callbacks suppressed
	[  +0.000219] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.197975] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.663833] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.371142] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.957607] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.931265] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.335580] kauditd_printk_skb: 138 callbacks suppressed
	[  +0.777556] kauditd_printk_skb: 170 callbacks suppressed
	[Nov19 21:50] kauditd_printk_skb: 175 callbacks suppressed
	[  +4.408706] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.917393] kauditd_printk_skb: 30 callbacks suppressed
	[ +10.244503] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.873481] kauditd_printk_skb: 61 callbacks suppressed
	[Nov19 21:52] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [d971002aa762deedd6285a5d5c86fd82cec31a15b577c574cfc672249eea6e71] <==
	{"level":"info","ts":"2025-11-19T21:49:15.403889Z","caller":"traceutil/trace.go:172","msg":"trace[1703868152] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1180; }","duration":"198.801876ms","start":"2025-11-19T21:49:15.205079Z","end":"2025-11-19T21:49:15.403881Z","steps":["trace[1703868152] 'agreement among raft nodes before linearized reading'  (duration: 198.71851ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T21:49:15.403986Z","caller":"traceutil/trace.go:172","msg":"trace[401809713] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"203.320718ms","start":"2025-11-19T21:49:15.200654Z","end":"2025-11-19T21:49:15.403975Z","steps":["trace[401809713] 'process raft request'  (duration: 203.195017ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T21:49:48.006212Z","caller":"traceutil/trace.go:172","msg":"trace[1839948641] linearizableReadLoop","detail":"{readStateIndex:1405; appliedIndex:1405; }","duration":"343.65839ms","start":"2025-11-19T21:49:47.662536Z","end":"2025-11-19T21:49:48.006195Z","steps":["trace[1839948641] 'read index received'  (duration: 343.650557ms)","trace[1839948641] 'applied index is now lower than readState.Index'  (duration: 6.722ยตs)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T21:49:48.006342Z","caller":"traceutil/trace.go:172","msg":"trace[386573809] transaction","detail":"{read_only:false; response_revision:1364; number_of_response:1; }","duration":"393.066463ms","start":"2025-11-19T21:49:47.613265Z","end":"2025-11-19T21:49:48.006331Z","steps":["trace[386573809] 'process raft request'  (duration: 392.958445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T21:49:48.006362Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.809091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T21:49:48.006441Z","caller":"traceutil/trace.go:172","msg":"trace[411230460] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1364; }","duration":"343.901801ms","start":"2025-11-19T21:49:47.662533Z","end":"2025-11-19T21:49:48.006435Z","steps":["trace[411230460] 'agreement among raft nodes before linearized reading'  (duration: 343.781735ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T21:49:48.006469Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T21:49:47.662515Z","time spent":"343.944593ms","remote":"127.0.0.1:56126","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-11-19T21:49:48.006502Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T21:49:47.613243Z","time spent":"393.201311ms","remote":"127.0.0.1:56096","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1362 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-19T21:49:48.006750Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.620897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2025-11-19T21:49:48.006773Z","caller":"traceutil/trace.go:172","msg":"trace[128459246] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1364; }","duration":"233.646647ms","start":"2025-11-19T21:49:47.773120Z","end":"2025-11-19T21:49:48.006767Z","steps":["trace[128459246] 'agreement among raft nodes before linearized reading'  (duration: 233.498366ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T21:49:48.006886Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.335972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T21:49:48.006901Z","caller":"traceutil/trace.go:172","msg":"trace[545279173] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1364; }","duration":"221.351053ms","start":"2025-11-19T21:49:47.785544Z","end":"2025-11-19T21:49:48.006895Z","steps":["trace[545279173] 'agreement among raft nodes before linearized reading'  (duration: 221.324045ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T21:49:48.006974Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"222.615107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-19T21:49:48.007083Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.749383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T21:49:48.007039Z","caller":"traceutil/trace.go:172","msg":"trace[1518675831] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1364; }","duration":"222.629881ms","start":"2025-11-19T21:49:47.784353Z","end":"2025-11-19T21:49:48.006983Z","steps":["trace[1518675831] 'agreement among raft nodes before linearized reading'  (duration: 222.607267ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T21:49:48.007104Z","caller":"traceutil/trace.go:172","msg":"trace[2140529718] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1364; }","duration":"232.772519ms","start":"2025-11-19T21:49:47.774326Z","end":"2025-11-19T21:49:48.007099Z","steps":["trace[2140529718] 'agreement among raft nodes before linearized reading'  (duration: 232.740655ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T21:50:08.101644Z","caller":"traceutil/trace.go:172","msg":"trace[1211681118] linearizableReadLoop","detail":"{readStateIndex:1650; appliedIndex:1650; }","duration":"187.12708ms","start":"2025-11-19T21:50:07.914484Z","end":"2025-11-19T21:50:08.101611Z","steps":["trace[1211681118] 'read index received'  (duration: 187.120546ms)","trace[1211681118] 'applied index is now lower than readState.Index'  (duration: 5.037ยตs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T21:50:08.101861Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.360765ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T21:50:08.101873Z","caller":"traceutil/trace.go:172","msg":"trace[1561890763] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"414.147107ms","start":"2025-11-19T21:50:07.687717Z","end":"2025-11-19T21:50:08.101864Z","steps":["trace[1561890763] 'process raft request'  (duration: 414.021928ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T21:50:08.101958Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T21:50:07.687697Z","time spent":"414.212516ms","remote":"127.0.0.1:56126","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4422,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-2210aff6-240c-446f-aed2-60b4ee919562\" mod_revision:1589 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-2210aff6-240c-446f-aed2-60b4ee919562\" value_size:4322 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-delete-pvc-2210aff6-240c-446f-aed2-60b4ee919562\" > >"}
	{"level":"info","ts":"2025-11-19T21:50:08.101886Z","caller":"traceutil/trace.go:172","msg":"trace[918497272] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1597; }","duration":"187.423848ms","start":"2025-11-19T21:50:07.914457Z","end":"2025-11-19T21:50:08.101881Z","steps":["trace[918497272] 'agreement among raft nodes before linearized reading'  (duration: 187.335473ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T21:50:15.967163Z","caller":"traceutil/trace.go:172","msg":"trace[98021308] linearizableReadLoop","detail":"{readStateIndex:1699; appliedIndex:1699; }","duration":"155.485698ms","start":"2025-11-19T21:50:15.811660Z","end":"2025-11-19T21:50:15.967146Z","steps":["trace[98021308] 'read index received'  (duration: 155.477227ms)","trace[98021308] 'applied index is now lower than readState.Index'  (duration: 7.3ยตs)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T21:50:15.967317Z","caller":"traceutil/trace.go:172","msg":"trace[1684857849] transaction","detail":"{read_only:false; response_revision:1645; number_of_response:1; }","duration":"260.610474ms","start":"2025-11-19T21:50:15.706696Z","end":"2025-11-19T21:50:15.967306Z","steps":["trace[1684857849] 'process raft request'  (duration: 260.488204ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T21:50:15.969173Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.826641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry-creds\" limit:1 ","response":"range_response_count:1 size:7614"}
	{"level":"info","ts":"2025-11-19T21:50:15.969277Z","caller":"traceutil/trace.go:172","msg":"trace[119696489] range","detail":"{range_begin:/registry/deployments/kube-system/registry-creds; range_end:; response_count:1; response_revision:1645; }","duration":"157.633613ms","start":"2025-11-19T21:50:15.811633Z","end":"2025-11-19T21:50:15.969267Z","steps":["trace[119696489] 'agreement among raft nodes before linearized reading'  (duration: 155.733072ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:52:27 up 5 min,  0 users,  load average: 1.14, 1.41, 0.73
	Linux addons-638975 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [35b1f66be30c130fa3cf7ccea2507b3b9249bc1abe5cbeb392232b831794bc16] <==
	E1119 21:48:32.977541       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.250.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.250.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.250.141:443: connect: connection refused" logger="UnhandledError"
	E1119 21:48:32.983697       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.250.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.250.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.250.141:443: connect: connection refused" logger="UnhandledError"
	I1119 21:48:33.115093       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 21:49:34.218829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.215:8443->192.168.39.1:34232: use of closed network connection
	E1119 21:49:34.416642       1 conn.go:339] Error on socket receive: read tcp 192.168.39.215:8443->192.168.39.1:34272: use of closed network connection
	I1119 21:49:43.593894       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.195.159"}
	I1119 21:49:59.984620       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1119 21:50:00.259105       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.225.142"}
	I1119 21:50:17.591114       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1119 21:50:17.820628       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1119 21:50:33.990028       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1119 21:50:43.439181       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1119 21:50:43.439243       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1119 21:50:43.484149       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1119 21:50:43.484190       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1119 21:50:43.485779       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1119 21:50:43.485913       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1119 21:50:43.523643       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1119 21:50:43.523700       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1119 21:50:43.544652       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1119 21:50:43.544826       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1119 21:50:44.486707       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1119 21:50:44.545677       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1119 21:50:44.683908       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1119 21:52:25.653482       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.246.175"}
	
	
	==> kube-controller-manager [05e93e46e539f9e049bb5a343874bd4d16206b184c7b50b2f12d0849969f58ef] <==
	E1119 21:50:53.999615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1119 21:50:56.223141       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 21:50:56.223179       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:50:56.376298       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 21:50:56.376340       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1119 21:50:59.272206       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:50:59.273179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:02.754966       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:02.756210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:02.988084       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:02.989132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:17.172771       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:17.174105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:18.360231       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:18.361514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:20.739851       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:20.741034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:44.310638       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:44.311934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:47.671472       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:47.672534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:51:50.929905       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:51:50.930913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1119 21:52:16.128690       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1119 21:52:16.130463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [da03beb301f249883263681f514a76660d629d9a2ac1c5f7a617849fa5fa79b2] <==
	I1119 21:47:58.800110       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 21:47:58.901888       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 21:47:58.901940       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.215"]
	E1119 21:47:58.902060       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 21:47:59.060034       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1119 21:47:59.060511       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1119 21:47:59.060550       1 server_linux.go:132] "Using iptables Proxier"
	I1119 21:47:59.076373       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 21:47:59.077339       1 server.go:527] "Version info" version="v1.34.1"
	I1119 21:47:59.077371       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:47:59.087622       1 config.go:200] "Starting service config controller"
	I1119 21:47:59.087655       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 21:47:59.087675       1 config.go:106] "Starting endpoint slice config controller"
	I1119 21:47:59.087679       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 21:47:59.087690       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 21:47:59.087693       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 21:47:59.091205       1 config.go:309] "Starting node config controller"
	I1119 21:47:59.092465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 21:47:59.187824       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 21:47:59.187874       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 21:47:59.187984       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 21:47:59.193353       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [b2a4495ce98248f19f1e49837a7b77c4a395d8454c8746a37f1d8d0ce08df56a] <==
	E1119 21:47:49.012738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:47:49.016360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 21:47:49.017306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:47:49.017348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 21:47:49.017456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:47:49.017496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:47:49.835719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 21:47:49.944194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:47:49.971778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 21:47:50.027721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:47:50.031371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 21:47:50.052600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 21:47:50.055688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:47:50.075685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:47:50.108770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 21:47:50.153866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 21:47:50.171320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 21:47:50.239310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 21:47:50.245886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 21:47:50.298560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:47:50.312146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 21:47:50.361009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 21:47:50.362269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:47:50.398667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1119 21:47:53.191743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 21:50:53 addons-638975 kubelet[1500]: I1119 21:50:53.330760    1500 scope.go:117] "RemoveContainer" containerID="d0152cc8479e7e1415b1267598bfcef45b492e23258e0592213c4305e83375de"
	Nov 19 21:50:53 addons-638975 kubelet[1500]: I1119 21:50:53.448903    1500 scope.go:117] "RemoveContainer" containerID="4b67303da5f089c06986ec9fb28fe2a09e39514bced9a389f9d49f648e650b25"
	Nov 19 21:50:59 addons-638975 kubelet[1500]: I1119 21:50:59.068298    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:51:02 addons-638975 kubelet[1500]: E1119 21:51:02.552898    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589062552314062  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:02 addons-638975 kubelet[1500]: E1119 21:51:02.552951    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589062552314062  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:12 addons-638975 kubelet[1500]: E1119 21:51:12.556704    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589072556067061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:12 addons-638975 kubelet[1500]: E1119 21:51:12.557171    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589072556067061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:22 addons-638975 kubelet[1500]: E1119 21:51:22.561248    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589082560577966  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:22 addons-638975 kubelet[1500]: E1119 21:51:22.561319    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589082560577966  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:32 addons-638975 kubelet[1500]: E1119 21:51:32.566680    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589092565618408  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:32 addons-638975 kubelet[1500]: E1119 21:51:32.566836    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589092565618408  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:41 addons-638975 kubelet[1500]: I1119 21:51:41.068081    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-tdz7q" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:51:42 addons-638975 kubelet[1500]: E1119 21:51:42.570370    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589102569862465  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:42 addons-638975 kubelet[1500]: E1119 21:51:42.570455    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589102569862465  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:52 addons-638975 kubelet[1500]: E1119 21:51:52.573019    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589112572529799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:51:52 addons-638975 kubelet[1500]: E1119 21:51:52.573047    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589112572529799  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:52:02 addons-638975 kubelet[1500]: E1119 21:52:02.577122    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589122576069883  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:52:02 addons-638975 kubelet[1500]: E1119 21:52:02.577222    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589122576069883  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:52:12 addons-638975 kubelet[1500]: E1119 21:52:12.579867    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589132579289700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:52:12 addons-638975 kubelet[1500]: E1119 21:52:12.579903    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589132579289700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:52:20 addons-638975 kubelet[1500]: I1119 21:52:20.068995    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:52:22 addons-638975 kubelet[1500]: E1119 21:52:22.583491    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763589142582897009  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:52:22 addons-638975 kubelet[1500]: E1119 21:52:22.583523    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763589142582897009  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588596}  inodes_used:{value:201}}"
	Nov 19 21:52:25 addons-638975 kubelet[1500]: I1119 21:52:25.660167    1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8zr\" (UniqueName: \"kubernetes.io/projected/607558a0-5f49-49fc-a9df-452ab38f3178-kube-api-access-rb8zr\") pod \"hello-world-app-5d498dc89-pm4hf\" (UID: \"607558a0-5f49-49fc-a9df-452ab38f3178\") " pod="default/hello-world-app-5d498dc89-pm4hf"
	Nov 19 21:52:27 addons-638975 kubelet[1500]: I1119 21:52:27.236623    1500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-pm4hf" podStartSLOduration=1.5481922780000001 podStartE2EDuration="2.236532393s" podCreationTimestamp="2025-11-19 21:52:25 +0000 UTC" firstStartedPulling="2025-11-19 21:52:26.204452332 +0000 UTC m=+274.298584503" lastFinishedPulling="2025-11-19 21:52:26.892792451 +0000 UTC m=+274.986924618" observedRunningTime="2025-11-19 21:52:27.233021931 +0000 UTC m=+275.327154106" watchObservedRunningTime="2025-11-19 21:52:27.236532393 +0000 UTC m=+275.330664577"
	
	
	==> storage-provisioner [ffff7f567396240d424f64bb45cde10f45948f6470abda0bfe80eb132c0555aa] <==
	W1119 21:52:02.962828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:04.967887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:04.974746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:06.978186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:06.984245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:08.988126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:08.995013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:11.006123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:11.019067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:13.026710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:13.033268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:15.036376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:15.042139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:17.046337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:17.053162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:19.057135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:19.066189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:21.071090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:21.079992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:23.084617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:23.090732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:25.094092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:25.101995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:27.106295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:27.116827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
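Note on the storage-provisioner lines above: they are API-server deprecation warnings returned to the provisioner's core/v1 Endpoints calls, since v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. A rough client-go sketch of reading the replacement resource instead (the kubeconfig path and the kube-system namespace are illustrative assumptions, not minikube code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig location (assumption for this sketch).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List EndpointSlices rather than the deprecated core/v1 Endpoints.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}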
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-638975 -n addons-638975
helpers_test.go:269: (dbg) Run:  kubectl --context addons-638975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-647cq ingress-nginx-admission-patch-x46tz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-638975 describe pod ingress-nginx-admission-create-647cq ingress-nginx-admission-patch-x46tz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-638975 describe pod ingress-nginx-admission-create-647cq ingress-nginx-admission-patch-x46tz: exit status 1 (61.068554ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-647cq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-x46tz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-638975 describe pod ingress-nginx-admission-create-647cq ingress-nginx-admission-patch-x46tz: exit status 1
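The describe calls above return NotFound most likely because the ingress-nginx admission create/patch Jobs had already completed and their pods were cleaned up. A minimal sketch, assuming the "ingress-nginx" namespace and reusing a clientset built as in the earlier sketch, that inspects the Jobs rather than their short-lived pods (the package and function names are hypothetical):

	package diagnostics

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// describeAdmissionJobs prints completion counts for the admission Jobs, which
	// remain queryable even after their pods are garbage-collected.
	func describeAdmissionJobs(ctx context.Context, cs *kubernetes.Clientset) error {
		jobs, err := cs.BatchV1().Jobs("ingress-nginx").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, j := range jobs.Items {
			fmt.Printf("%s: succeeded=%d failed=%d\n", j.Name, j.Status.Succeeded, j.Status.Failed)
		}
		return nil
	}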
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable ingress-dns --alsologtostderr -v=1: (1.092754875s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable ingress --alsologtostderr -v=1: (7.793420364s)
--- FAIL: TestAddons/parallel/Ingress (157.41s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (1577.04s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1119 21:56:15.467240  121369 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-274272 --alsologtostderr -v=8
E1119 21:57:08.968265  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:59:25.093713  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:59:52.810281  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:04:25.102771  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:25.094443  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-274272 --alsologtostderr -v=8: exit status 80 (13m55.558217498s)

                                                
                                                
-- stdout --
	* [functional-274272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-274272" primary control-plane node in "functional-274272" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:56:15.524505  125655 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:56:15.524640  125655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:56:15.524650  125655 out.go:374] Setting ErrFile to fd 2...
	I1119 21:56:15.524653  125655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:56:15.524902  125655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 21:56:15.525357  125655 out.go:368] Setting JSON to false
	I1119 21:56:15.526238  125655 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13122,"bootTime":1763576253,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:56:15.526344  125655 start.go:143] virtualization: kvm guest
	I1119 21:56:15.529220  125655 out.go:179] * [functional-274272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:56:15.530888  125655 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:56:15.530892  125655 notify.go:221] Checking for updates...
	I1119 21:56:15.533320  125655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:56:15.534592  125655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 21:56:15.535896  125655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:56:15.537284  125655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:56:15.538692  125655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:56:15.540498  125655 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:56:15.540627  125655 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:56:15.578138  125655 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 21:56:15.579740  125655 start.go:309] selected driver: kvm2
	I1119 21:56:15.579759  125655 start.go:930] validating driver "kvm2" against &{Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:56:15.579860  125655 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:56:15.580973  125655 cni.go:84] Creating CNI manager for ""
	I1119 21:56:15.581058  125655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:56:15.581134  125655 start.go:353] cluster config:
	{Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:56:15.581282  125655 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:56:15.582970  125655 out.go:179] * Starting "functional-274272" primary control-plane node in "functional-274272" cluster
	I1119 21:56:15.584343  125655 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:56:15.584377  125655 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 21:56:15.584386  125655 cache.go:65] Caching tarball of preloaded images
	I1119 21:56:15.584490  125655 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 21:56:15.584505  125655 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:56:15.584592  125655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/config.json ...
	I1119 21:56:15.584871  125655 start.go:360] acquireMachinesLock for functional-274272: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 21:56:15.584938  125655 start.go:364] duration metric: took 31.116µs to acquireMachinesLock for "functional-274272"
	I1119 21:56:15.584961  125655 start.go:96] Skipping create...Using existing machine configuration
	I1119 21:56:15.584971  125655 fix.go:54] fixHost starting: 
	I1119 21:56:15.587127  125655 fix.go:112] recreateIfNeeded on functional-274272: state=Running err=<nil>
	W1119 21:56:15.587160  125655 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 21:56:15.589621  125655 out.go:252] * Updating the running kvm2 "functional-274272" VM ...
	I1119 21:56:15.589658  125655 machine.go:94] provisionDockerMachine start ...
	I1119 21:56:15.592549  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.593154  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.593187  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.593360  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.593603  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.593618  125655 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:56:15.702194  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-274272
	
	I1119 21:56:15.702244  125655 buildroot.go:166] provisioning hostname "functional-274272"
	I1119 21:56:15.705141  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.705571  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.705614  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.705842  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.706110  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.706125  125655 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-274272 && echo "functional-274272" | sudo tee /etc/hostname
	I1119 21:56:15.846160  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-274272
	
	I1119 21:56:15.849601  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.850076  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.850116  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.850306  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.850538  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.850562  125655 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-274272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-274272/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-274272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:56:15.958572  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 21:56:15.958602  125655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 21:56:15.958621  125655 buildroot.go:174] setting up certificates
	I1119 21:56:15.958644  125655 provision.go:84] configureAuth start
	I1119 21:56:15.961541  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.961948  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.961978  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.964387  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.964833  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.964860  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.965011  125655 provision.go:143] copyHostCerts
	I1119 21:56:15.965045  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 21:56:15.965088  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 21:56:15.965106  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 21:56:15.965186  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 21:56:15.965327  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 21:56:15.965363  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 21:56:15.965371  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 21:56:15.965420  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 21:56:15.965509  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 21:56:15.965533  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 21:56:15.965543  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 21:56:15.965592  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 21:56:15.965675  125655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.functional-274272 san=[127.0.0.1 192.168.39.56 functional-274272 localhost minikube]
	I1119 21:56:16.178107  125655 provision.go:177] copyRemoteCerts
	I1119 21:56:16.178177  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:56:16.180523  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.180929  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:16.180960  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.181094  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:16.267429  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 21:56:16.267516  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 21:56:16.303049  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 21:56:16.303134  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:56:16.336134  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 21:56:16.336220  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 21:56:16.369360  125655 provision.go:87] duration metric: took 410.702355ms to configureAuth
	I1119 21:56:16.369395  125655 buildroot.go:189] setting minikube options for container-runtime
	I1119 21:56:16.369609  125655 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:56:16.372543  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.372941  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:16.372970  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.373148  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:16.373382  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:16.373404  125655 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:56:21.981912  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:56:21.981950  125655 machine.go:97] duration metric: took 6.392282192s to provisionDockerMachine
	I1119 21:56:21.981967  125655 start.go:293] postStartSetup for "functional-274272" (driver="kvm2")
	I1119 21:56:21.981980  125655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:56:21.982049  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:56:21.985113  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:21.985484  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:21.985537  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:21.985749  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.102924  125655 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:56:22.114116  125655 command_runner.go:130] > NAME=Buildroot
	I1119 21:56:22.114134  125655 command_runner.go:130] > VERSION=2025.02-dirty
	I1119 21:56:22.114138  125655 command_runner.go:130] > ID=buildroot
	I1119 21:56:22.114143  125655 command_runner.go:130] > VERSION_ID=2025.02
	I1119 21:56:22.114148  125655 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1119 21:56:22.114192  125655 info.go:137] Remote host: Buildroot 2025.02
	I1119 21:56:22.114211  125655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 21:56:22.114270  125655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 21:56:22.114383  125655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 21:56:22.114400  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 21:56:22.114498  125655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts -> hosts in /etc/test/nested/copy/121369
	I1119 21:56:22.114510  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts -> /etc/test/nested/copy/121369/hosts
	I1119 21:56:22.114560  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/121369
	I1119 21:56:22.154301  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 21:56:22.234489  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts --> /etc/test/nested/copy/121369/hosts (40 bytes)
	I1119 21:56:22.328918  125655 start.go:296] duration metric: took 346.928603ms for postStartSetup
	I1119 21:56:22.328975  125655 fix.go:56] duration metric: took 6.74400308s for fixHost
	I1119 21:56:22.332245  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.332719  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.332761  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.333032  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:22.333335  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:22.333355  125655 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 21:56:22.524275  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763589382.515165407
	
	I1119 21:56:22.524306  125655 fix.go:216] guest clock: 1763589382.515165407
	I1119 21:56:22.524317  125655 fix.go:229] Guest: 2025-11-19 21:56:22.515165407 +0000 UTC Remote: 2025-11-19 21:56:22.328982326 +0000 UTC m=+6.856474824 (delta=186.183081ms)
	I1119 21:56:22.524340  125655 fix.go:200] guest clock delta is within tolerance: 186.183081ms
	I1119 21:56:22.524348  125655 start.go:83] releasing machines lock for "functional-274272", held for 6.939395313s
	I1119 21:56:22.527518  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.527977  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.528013  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.528866  125655 ssh_runner.go:195] Run: cat /version.json
	I1119 21:56:22.528919  125655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:56:22.532219  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532345  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532671  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.532706  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532818  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.532846  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532915  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.533175  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.669778  125655 command_runner.go:130] > {"iso_version": "v1.37.0-1763575914-21918", "kicbase_version": "v0.0.48-1763561786-21918", "minikube_version": "v1.37.0", "commit": "425f5f15185086235ffd9f03de5624881b145800"}
	I1119 21:56:22.670044  125655 ssh_runner.go:195] Run: systemctl --version
	I1119 21:56:22.709748  125655 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1119 21:56:22.714621  125655 command_runner.go:130] > systemd 256 (256.7)
	I1119 21:56:22.714659  125655 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1119 21:56:22.714732  125655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:56:22.954339  125655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1119 21:56:22.974290  125655 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1119 21:56:22.977797  125655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:56:22.977901  125655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:56:23.008282  125655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 21:56:23.008311  125655 start.go:496] detecting cgroup driver to use...
	I1119 21:56:23.008412  125655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:56:23.105440  125655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:56:23.135903  125655 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:56:23.135971  125655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:56:23.175334  125655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:56:23.257801  125655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:56:23.591204  125655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:56:23.882315  125655 docker.go:234] disabling docker service ...
	I1119 21:56:23.882405  125655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:56:23.921893  125655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:56:23.944430  125655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:56:24.230558  125655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:56:24.528514  125655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:56:24.549079  125655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:56:24.594053  125655 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1119 21:56:24.595417  125655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:56:24.595501  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.619383  125655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 21:56:24.619478  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.644219  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.664023  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.686767  125655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:56:24.708545  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.732834  125655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.757166  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.777845  125655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:56:24.796276  125655 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1119 21:56:24.796965  125655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:56:24.817150  125655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:56:25.056155  125655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 21:57:55.500242  125655 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.444033021s)
	I1119 21:57:55.500288  125655 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:57:55.500356  125655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:57:55.507439  125655 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1119 21:57:55.507472  125655 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1119 21:57:55.507489  125655 command_runner.go:130] > Device: 0,23	Inode: 1960        Links: 1
	I1119 21:57:55.507496  125655 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1119 21:57:55.507501  125655 command_runner.go:130] > Access: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507518  125655 command_runner.go:130] > Modify: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507523  125655 command_runner.go:130] > Change: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507528  125655 command_runner.go:130] >  Birth: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507551  125655 start.go:564] Will wait 60s for crictl version
	I1119 21:57:55.507616  125655 ssh_runner.go:195] Run: which crictl
	I1119 21:57:55.512454  125655 command_runner.go:130] > /usr/bin/crictl
	I1119 21:57:55.512630  125655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 21:57:55.557330  125655 command_runner.go:130] > Version:  0.1.0
	I1119 21:57:55.557354  125655 command_runner.go:130] > RuntimeName:  cri-o
	I1119 21:57:55.557359  125655 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1119 21:57:55.557366  125655 command_runner.go:130] > RuntimeApiVersion:  v1
	I1119 21:57:55.557387  125655 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 21:57:55.557484  125655 ssh_runner.go:195] Run: crio --version
	I1119 21:57:55.589692  125655 command_runner.go:130] > crio version 1.29.1
	I1119 21:57:55.589714  125655 command_runner.go:130] > Version:        1.29.1
	I1119 21:57:55.589733  125655 command_runner.go:130] > GitCommit:      unknown
	I1119 21:57:55.589738  125655 command_runner.go:130] > GitCommitDate:  unknown
	I1119 21:57:55.589742  125655 command_runner.go:130] > GitTreeState:   clean
	I1119 21:57:55.589748  125655 command_runner.go:130] > BuildDate:      2025-11-19T21:18:08Z
	I1119 21:57:55.589752  125655 command_runner.go:130] > GoVersion:      go1.23.4
	I1119 21:57:55.589755  125655 command_runner.go:130] > Compiler:       gc
	I1119 21:57:55.589760  125655 command_runner.go:130] > Platform:       linux/amd64
	I1119 21:57:55.589763  125655 command_runner.go:130] > Linkmode:       dynamic
	I1119 21:57:55.589779  125655 command_runner.go:130] > BuildTags:      
	I1119 21:57:55.589785  125655 command_runner.go:130] >   containers_image_ostree_stub
	I1119 21:57:55.589789  125655 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1119 21:57:55.589793  125655 command_runner.go:130] >   btrfs_noversion
	I1119 21:57:55.589798  125655 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1119 21:57:55.589802  125655 command_runner.go:130] >   libdm_no_deferred_remove
	I1119 21:57:55.589809  125655 command_runner.go:130] >   seccomp
	I1119 21:57:55.589813  125655 command_runner.go:130] > LDFlags:          unknown
	I1119 21:57:55.589817  125655 command_runner.go:130] > SeccompEnabled:   true
	I1119 21:57:55.589824  125655 command_runner.go:130] > AppArmorEnabled:  false
	I1119 21:57:55.590824  125655 ssh_runner.go:195] Run: crio --version
	I1119 21:57:55.623734  125655 command_runner.go:130] > crio version 1.29.1
	I1119 21:57:55.623758  125655 command_runner.go:130] > Version:        1.29.1
	I1119 21:57:55.623767  125655 command_runner.go:130] > GitCommit:      unknown
	I1119 21:57:55.623773  125655 command_runner.go:130] > GitCommitDate:  unknown
	I1119 21:57:55.623778  125655 command_runner.go:130] > GitTreeState:   clean
	I1119 21:57:55.623785  125655 command_runner.go:130] > BuildDate:      2025-11-19T21:18:08Z
	I1119 21:57:55.623791  125655 command_runner.go:130] > GoVersion:      go1.23.4
	I1119 21:57:55.623797  125655 command_runner.go:130] > Compiler:       gc
	I1119 21:57:55.623803  125655 command_runner.go:130] > Platform:       linux/amd64
	I1119 21:57:55.623808  125655 command_runner.go:130] > Linkmode:       dynamic
	I1119 21:57:55.623815  125655 command_runner.go:130] > BuildTags:      
	I1119 21:57:55.623822  125655 command_runner.go:130] >   containers_image_ostree_stub
	I1119 21:57:55.623832  125655 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1119 21:57:55.623838  125655 command_runner.go:130] >   btrfs_noversion
	I1119 21:57:55.623847  125655 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1119 21:57:55.623868  125655 command_runner.go:130] >   libdm_no_deferred_remove
	I1119 21:57:55.623897  125655 command_runner.go:130] >   seccomp
	I1119 21:57:55.623907  125655 command_runner.go:130] > LDFlags:          unknown
	I1119 21:57:55.623914  125655 command_runner.go:130] > SeccompEnabled:   true
	I1119 21:57:55.623922  125655 command_runner.go:130] > AppArmorEnabled:  false
	I1119 21:57:55.626580  125655 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 21:57:55.630696  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:57:55.631264  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:57:55.631302  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:57:55.631528  125655 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 21:57:55.636396  125655 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1119 21:57:55.636493  125655 kubeadm.go:884] updating cluster {Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:57:55.636629  125655 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:57:55.636691  125655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:57:55.684104  125655 command_runner.go:130] > {
	I1119 21:57:55.684131  125655 command_runner.go:130] >   "images": [
	I1119 21:57:55.684137  125655 command_runner.go:130] >     {
	I1119 21:57:55.684148  125655 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1119 21:57:55.684155  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684165  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1119 21:57:55.684170  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684176  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684188  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1119 21:57:55.684199  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1119 21:57:55.684209  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684215  125655 command_runner.go:130] >       "size": "109379124",
	I1119 21:57:55.684222  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684231  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684259  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684270  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684275  125655 command_runner.go:130] >     },
	I1119 21:57:55.684280  125655 command_runner.go:130] >     {
	I1119 21:57:55.684290  125655 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1119 21:57:55.684299  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684308  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1119 21:57:55.684317  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684323  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684344  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1119 21:57:55.684360  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1119 21:57:55.684366  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684377  125655 command_runner.go:130] >       "size": "31470524",
	I1119 21:57:55.684385  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684392  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684399  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684407  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684412  125655 command_runner.go:130] >     },
	I1119 21:57:55.684421  125655 command_runner.go:130] >     {
	I1119 21:57:55.684430  125655 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1119 21:57:55.684439  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684447  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1119 21:57:55.684457  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684463  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684478  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1119 21:57:55.684492  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1119 21:57:55.684501  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684507  125655 command_runner.go:130] >       "size": "76103547",
	I1119 21:57:55.684514  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684521  125655 command_runner.go:130] >       "username": "nonroot",
	I1119 21:57:55.684530  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684536  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684544  125655 command_runner.go:130] >     },
	I1119 21:57:55.684551  125655 command_runner.go:130] >     {
	I1119 21:57:55.684561  125655 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1119 21:57:55.684567  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684578  125655 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1119 21:57:55.684584  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684591  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684603  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1119 21:57:55.684630  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1119 21:57:55.684645  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684659  125655 command_runner.go:130] >       "size": "195976448",
	I1119 21:57:55.684669  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684675  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684681  125655 command_runner.go:130] >       },
	I1119 21:57:55.684687  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684693  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684699  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684704  125655 command_runner.go:130] >     },
	I1119 21:57:55.684719  125655 command_runner.go:130] >     {
	I1119 21:57:55.684731  125655 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1119 21:57:55.684738  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684745  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1119 21:57:55.684753  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684759  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684771  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1119 21:57:55.684783  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1119 21:57:55.684791  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684796  125655 command_runner.go:130] >       "size": "89046001",
	I1119 21:57:55.684802  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684808  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684817  125655 command_runner.go:130] >       },
	I1119 21:57:55.684822  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684830  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684836  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684844  125655 command_runner.go:130] >     },
	I1119 21:57:55.684849  125655 command_runner.go:130] >     {
	I1119 21:57:55.684860  125655 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1119 21:57:55.684866  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684898  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1119 21:57:55.684908  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684914  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684927  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1119 21:57:55.684940  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1119 21:57:55.684953  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684962  125655 command_runner.go:130] >       "size": "76004181",
	I1119 21:57:55.684968  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684976  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684981  125655 command_runner.go:130] >       },
	I1119 21:57:55.684990  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684995  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685004  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685009  125655 command_runner.go:130] >     },
	I1119 21:57:55.685015  125655 command_runner.go:130] >     {
	I1119 21:57:55.685025  125655 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1119 21:57:55.685034  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685041  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1119 21:57:55.685049  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685055  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685069  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1119 21:57:55.685081  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1119 21:57:55.685090  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685095  125655 command_runner.go:130] >       "size": "73138073",
	I1119 21:57:55.685104  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.685110  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685119  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685125  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685134  125655 command_runner.go:130] >     },
	I1119 21:57:55.685140  125655 command_runner.go:130] >     {
	I1119 21:57:55.685151  125655 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1119 21:57:55.685158  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685166  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1119 21:57:55.685174  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685181  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685213  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1119 21:57:55.685226  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1119 21:57:55.685240  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685255  125655 command_runner.go:130] >       "size": "53844823",
	I1119 21:57:55.685264  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.685270  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.685278  125655 command_runner.go:130] >       },
	I1119 21:57:55.685285  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685294  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685299  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685307  125655 command_runner.go:130] >     },
	I1119 21:57:55.685311  125655 command_runner.go:130] >     {
	I1119 21:57:55.685322  125655 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1119 21:57:55.685327  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685334  125655 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.685339  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685346  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685357  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1119 21:57:55.685370  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1119 21:57:55.685378  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685383  125655 command_runner.go:130] >       "size": "742092",
	I1119 21:57:55.685390  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.685396  125655 command_runner.go:130] >         "value": "65535"
	I1119 21:57:55.685403  125655 command_runner.go:130] >       },
	I1119 21:57:55.685408  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685414  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685419  125655 command_runner.go:130] >       "pinned": true
	I1119 21:57:55.685427  125655 command_runner.go:130] >     }
	I1119 21:57:55.685433  125655 command_runner.go:130] >   ]
	I1119 21:57:55.685437  125655 command_runner.go:130] > }
	I1119 21:57:55.686566  125655 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:57:55.686586  125655 crio.go:433] Images already preloaded, skipping extraction
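	The JSON block above is the `crictl images --output json` payload that minikube inspects before deciding whether the preload tarball still needs to be extracted. Below is a minimal Go sketch of decoding just the fields visible in this output; the type names (imageList, criImage) are illustrative and are not minikube's own types.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // criImage mirrors the per-image fields visible in the log output above.
    type criImage struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"` // reported as a decimal string, e.g. "742092"
        UID         *struct {
            Value string `json:"value"`
        } `json:"uid"` // null for images that do not declare a numeric user
        Username string `json:"username"`
        Pinned   bool   `json:"pinned"`
    }

    // imageList is the top-level object: {"images": [...]}.
    type imageList struct {
        Images []criImage `json:"images"`
    }

    func main() {
        raw := []byte(`{"images":[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoTags":["registry.k8s.io/pause:3.10.1"],"repoDigests":[],"size":"742092","uid":{"value":"65535"},"username":"","pinned":true}]}`)
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            fmt.Println(img.RepoTags, img.Size, img.Pinned)
        }
    }

	Note that "size" is serialized as a string, so any numeric comparison would first need strconv.ParseUint.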
	I1119 21:57:55.686648  125655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:57:55.724518  125655 command_runner.go:130] > {
	I1119 21:57:55.724544  125655 command_runner.go:130] >   "images": [
	I1119 21:57:55.724550  125655 command_runner.go:130] >     {
	I1119 21:57:55.724562  125655 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1119 21:57:55.724570  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724578  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1119 21:57:55.724582  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724587  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724597  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1119 21:57:55.724607  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1119 21:57:55.724613  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724619  125655 command_runner.go:130] >       "size": "109379124",
	I1119 21:57:55.724626  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724632  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.724643  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724650  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724656  125655 command_runner.go:130] >     },
	I1119 21:57:55.724662  125655 command_runner.go:130] >     {
	I1119 21:57:55.724672  125655 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1119 21:57:55.724681  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724688  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1119 21:57:55.724694  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724714  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724730  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1119 21:57:55.724751  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1119 21:57:55.724760  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724772  125655 command_runner.go:130] >       "size": "31470524",
	I1119 21:57:55.724779  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724789  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.724795  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724802  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724808  125655 command_runner.go:130] >     },
	I1119 21:57:55.724814  125655 command_runner.go:130] >     {
	I1119 21:57:55.724828  125655 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1119 21:57:55.724838  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724847  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1119 21:57:55.724853  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724860  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724872  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1119 21:57:55.724903  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1119 21:57:55.724913  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724923  125655 command_runner.go:130] >       "size": "76103547",
	I1119 21:57:55.724930  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724940  125655 command_runner.go:130] >       "username": "nonroot",
	I1119 21:57:55.724945  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724950  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724955  125655 command_runner.go:130] >     },
	I1119 21:57:55.724959  125655 command_runner.go:130] >     {
	I1119 21:57:55.724967  125655 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1119 21:57:55.724974  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724982  125655 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1119 21:57:55.724989  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724996  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725010  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1119 21:57:55.725033  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1119 21:57:55.725048  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725055  125655 command_runner.go:130] >       "size": "195976448",
	I1119 21:57:55.725065  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725071  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725077  125655 command_runner.go:130] >       },
	I1119 21:57:55.725083  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725089  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725097  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725103  125655 command_runner.go:130] >     },
	I1119 21:57:55.725109  125655 command_runner.go:130] >     {
	I1119 21:57:55.725120  125655 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1119 21:57:55.725127  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725136  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1119 21:57:55.725142  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725149  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725161  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1119 21:57:55.725180  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1119 21:57:55.725186  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725203  125655 command_runner.go:130] >       "size": "89046001",
	I1119 21:57:55.725210  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725218  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725231  125655 command_runner.go:130] >       },
	I1119 21:57:55.725241  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725247  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725256  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725260  125655 command_runner.go:130] >     },
	I1119 21:57:55.725265  125655 command_runner.go:130] >     {
	I1119 21:57:55.725277  125655 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1119 21:57:55.725284  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725295  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1119 21:57:55.725301  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725308  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725324  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1119 21:57:55.725348  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1119 21:57:55.725357  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725364  125655 command_runner.go:130] >       "size": "76004181",
	I1119 21:57:55.725373  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725379  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725385  125655 command_runner.go:130] >       },
	I1119 21:57:55.725389  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725395  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725400  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725403  125655 command_runner.go:130] >     },
	I1119 21:57:55.725408  125655 command_runner.go:130] >     {
	I1119 21:57:55.725415  125655 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1119 21:57:55.725421  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725431  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1119 21:57:55.725437  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725443  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725453  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1119 21:57:55.725463  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1119 21:57:55.725470  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725477  125655 command_runner.go:130] >       "size": "73138073",
	I1119 21:57:55.725482  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.725489  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725496  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725503  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725509  125655 command_runner.go:130] >     },
	I1119 21:57:55.725515  125655 command_runner.go:130] >     {
	I1119 21:57:55.725525  125655 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1119 21:57:55.725531  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725539  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1119 21:57:55.725545  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725551  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725603  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1119 21:57:55.725620  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1119 21:57:55.725634  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725643  125655 command_runner.go:130] >       "size": "53844823",
	I1119 21:57:55.725649  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725655  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725661  125655 command_runner.go:130] >       },
	I1119 21:57:55.725667  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725674  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725680  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725685  125655 command_runner.go:130] >     },
	I1119 21:57:55.725691  125655 command_runner.go:130] >     {
	I1119 21:57:55.725704  125655 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1119 21:57:55.725711  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725721  125655 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.725727  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725733  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725745  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1119 21:57:55.725759  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1119 21:57:55.725765  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725774  125655 command_runner.go:130] >       "size": "742092",
	I1119 21:57:55.725781  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725787  125655 command_runner.go:130] >         "value": "65535"
	I1119 21:57:55.725795  125655 command_runner.go:130] >       },
	I1119 21:57:55.725818  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725827  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725833  125655 command_runner.go:130] >       "pinned": true
	I1119 21:57:55.725838  125655 command_runner.go:130] >     }
	I1119 21:57:55.725844  125655 command_runner.go:130] >   ]
	I1119 21:57:55.725849  125655 command_runner.go:130] > }
	I1119 21:57:55.726172  125655 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:57:55.726196  125655 cache_images.go:86] Images are preloaded, skipping loading
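	The two conclusions above ("all images are preloaded", "Images are preloaded, skipping loading") amount to a set-membership check: every tag required for Kubernetes v1.34.1 on crio must appear among the repoTags reported by crictl. A hypothetical sketch of such a check follows; it is not minikube's actual cache_images implementation, and the "required" list is taken from the tags visible in the log.

    package main

    import "fmt"

    // allPreloaded reports whether every required image tag appears among the
    // repoTags returned by `crictl images --output json`.
    func allPreloaded(required, listed []string) bool {
        have := make(map[string]bool, len(listed))
        for _, tag := range listed {
            have[tag] = true
        }
        for _, tag := range required {
            if !have[tag] {
                return false
            }
        }
        return true
    }

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.34.1",
            "registry.k8s.io/kube-controller-manager:v1.34.1",
            "registry.k8s.io/kube-scheduler:v1.34.1",
            "registry.k8s.io/kube-proxy:v1.34.1",
            "registry.k8s.io/etcd:3.6.4-0",
            "registry.k8s.io/coredns/coredns:v1.12.1",
            "registry.k8s.io/pause:3.10.1",
        }
        listed := []string{
            "docker.io/kindest/kindnetd:v20250512-df8de77b",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/coredns/coredns:v1.12.1",
            "registry.k8s.io/etcd:3.6.4-0",
            "registry.k8s.io/kube-apiserver:v1.34.1",
            "registry.k8s.io/kube-controller-manager:v1.34.1",
            "registry.k8s.io/kube-proxy:v1.34.1",
            "registry.k8s.io/kube-scheduler:v1.34.1",
            "registry.k8s.io/pause:3.10.1",
        }
        fmt.Println(allPreloaded(required, listed)) // true, so image loading is skipped
    }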
	I1119 21:57:55.726209  125655 kubeadm.go:935] updating node { 192.168.39.56 8441 v1.34.1 crio true true} ...
	I1119 21:57:55.726334  125655 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-274272 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
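	The kubelet unit fragment above is rendered from the cluster config on the preceding line; the only host-specific values are the kubelet binary path, --hostname-override and --node-ip. A minimal text/template sketch of producing such a [Service] drop-in is shown below; the template text and type name are illustrative, not minikube's actual template.

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletOverride holds the values substituted into the drop-in, matching
    // the flags visible in the log above.
    type kubeletOverride struct {
        KubeletPath      string
        HostnameOverride string
        NodeIP           string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(dropIn))
        _ = tmpl.Execute(os.Stdout, kubeletOverride{
            KubeletPath:      "/var/lib/minikube/binaries/v1.34.1/kubelet",
            HostnameOverride: "functional-274272",
            NodeIP:           "192.168.39.56",
        })
    }

	On a node, the rendered drop-in would typically be checked with `systemctl cat kubelet` after a `systemctl daemon-reload`.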
	I1119 21:57:55.726417  125655 ssh_runner.go:195] Run: crio config
	I1119 21:57:55.773985  125655 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1119 21:57:55.774020  125655 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1119 21:57:55.774032  125655 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1119 21:57:55.774049  125655 command_runner.go:130] > #
	I1119 21:57:55.774057  125655 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1119 21:57:55.774064  125655 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1119 21:57:55.774073  125655 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1119 21:57:55.774083  125655 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1119 21:57:55.774088  125655 command_runner.go:130] > # reload'.
	I1119 21:57:55.774100  125655 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1119 21:57:55.774114  125655 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1119 21:57:55.774123  125655 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1119 21:57:55.774134  125655 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1119 21:57:55.774140  125655 command_runner.go:130] > [crio]
	I1119 21:57:55.774153  125655 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1119 21:57:55.774158  125655 command_runner.go:130] > # container images, in this directory.
	I1119 21:57:55.774167  125655 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1119 21:57:55.774185  125655 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1119 21:57:55.774195  125655 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1119 21:57:55.774215  125655 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1119 21:57:55.774225  125655 command_runner.go:130] > # imagestore = ""
	I1119 21:57:55.774235  125655 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1119 21:57:55.774244  125655 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1119 21:57:55.774249  125655 command_runner.go:130] > # storage_driver = "overlay"
	I1119 21:57:55.774256  125655 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1119 21:57:55.774266  125655 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1119 21:57:55.774272  125655 command_runner.go:130] > storage_option = [
	I1119 21:57:55.774283  125655 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1119 21:57:55.774288  125655 command_runner.go:130] > ]
	I1119 21:57:55.774298  125655 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1119 21:57:55.774311  125655 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1119 21:57:55.774319  125655 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1119 21:57:55.774328  125655 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1119 21:57:55.774340  125655 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1119 21:57:55.774346  125655 command_runner.go:130] > # always happen on a node reboot
	I1119 21:57:55.774354  125655 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1119 21:57:55.774377  125655 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1119 21:57:55.774390  125655 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1119 21:57:55.774398  125655 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1119 21:57:55.774409  125655 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1119 21:57:55.774421  125655 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1119 21:57:55.774436  125655 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1119 21:57:55.774442  125655 command_runner.go:130] > # internal_wipe = true
	I1119 21:57:55.774455  125655 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1119 21:57:55.774462  125655 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1119 21:57:55.774469  125655 command_runner.go:130] > # internal_repair = false
	I1119 21:57:55.774476  125655 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1119 21:57:55.774486  125655 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1119 21:57:55.774494  125655 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1119 21:57:55.774508  125655 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1119 21:57:55.774516  125655 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1119 21:57:55.774529  125655 command_runner.go:130] > [crio.api]
	I1119 21:57:55.774537  125655 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1119 21:57:55.774545  125655 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1119 21:57:55.774555  125655 command_runner.go:130] > # IP address on which the stream server will listen.
	I1119 21:57:55.774560  125655 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1119 21:57:55.774574  125655 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1119 21:57:55.774583  125655 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1119 21:57:55.774589  125655 command_runner.go:130] > # stream_port = "0"
	I1119 21:57:55.774598  125655 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1119 21:57:55.774609  125655 command_runner.go:130] > # stream_enable_tls = false
	I1119 21:57:55.774617  125655 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1119 21:57:55.774621  125655 command_runner.go:130] > # stream_idle_timeout = ""
	I1119 21:57:55.774630  125655 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1119 21:57:55.774635  125655 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1119 21:57:55.774639  125655 command_runner.go:130] > # minutes.
	I1119 21:57:55.774643  125655 command_runner.go:130] > # stream_tls_cert = ""
	I1119 21:57:55.774648  125655 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1119 21:57:55.774656  125655 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1119 21:57:55.774665  125655 command_runner.go:130] > # stream_tls_key = ""
	I1119 21:57:55.774673  125655 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1119 21:57:55.774680  125655 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1119 21:57:55.774706  125655 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1119 21:57:55.774716  125655 command_runner.go:130] > # stream_tls_ca = ""
	I1119 21:57:55.774726  125655 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1119 21:57:55.774734  125655 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1119 21:57:55.774745  125655 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1119 21:57:55.774755  125655 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1119 21:57:55.774765  125655 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1119 21:57:55.774777  125655 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1119 21:57:55.774782  125655 command_runner.go:130] > [crio.runtime]
	I1119 21:57:55.774791  125655 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1119 21:57:55.774804  125655 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1119 21:57:55.774810  125655 command_runner.go:130] > # "nofile=1024:2048"
	I1119 21:57:55.774827  125655 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1119 21:57:55.774834  125655 command_runner.go:130] > # default_ulimits = [
	I1119 21:57:55.774841  125655 command_runner.go:130] > # ]
	I1119 21:57:55.774850  125655 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1119 21:57:55.774857  125655 command_runner.go:130] > # no_pivot = false
	I1119 21:57:55.774866  125655 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1119 21:57:55.774891  125655 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1119 21:57:55.774899  125655 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1119 21:57:55.774910  125655 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1119 21:57:55.774918  125655 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1119 21:57:55.774932  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1119 21:57:55.774939  125655 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1119 21:57:55.774947  125655 command_runner.go:130] > # Cgroup setting for conmon
	I1119 21:57:55.774955  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1119 21:57:55.774964  125655 command_runner.go:130] > conmon_cgroup = "pod"
	I1119 21:57:55.774974  125655 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1119 21:57:55.774985  125655 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1119 21:57:55.774996  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1119 21:57:55.775012  125655 command_runner.go:130] > conmon_env = [
	I1119 21:57:55.775026  125655 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1119 21:57:55.775031  125655 command_runner.go:130] > ]
	I1119 21:57:55.775043  125655 command_runner.go:130] > # Additional environment variables to set for all the
	I1119 21:57:55.775051  125655 command_runner.go:130] > # containers. These are overridden if set in the
	I1119 21:57:55.775061  125655 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1119 21:57:55.775067  125655 command_runner.go:130] > # default_env = [
	I1119 21:57:55.775073  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775081  125655 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1119 21:57:55.775095  125655 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1119 21:57:55.775101  125655 command_runner.go:130] > # selinux = false
	I1119 21:57:55.775112  125655 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1119 21:57:55.775121  125655 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1119 21:57:55.775133  125655 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1119 21:57:55.775139  125655 command_runner.go:130] > # seccomp_profile = ""
	I1119 21:57:55.775149  125655 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1119 21:57:55.775157  125655 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1119 21:57:55.775167  125655 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1119 21:57:55.775178  125655 command_runner.go:130] > # which might increase security.
	I1119 21:57:55.775185  125655 command_runner.go:130] > # This option is currently deprecated,
	I1119 21:57:55.775195  125655 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1119 21:57:55.775209  125655 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1119 21:57:55.775222  125655 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1119 21:57:55.775232  125655 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1119 21:57:55.775246  125655 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1119 21:57:55.775259  125655 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1119 21:57:55.775271  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.775278  125655 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1119 21:57:55.775287  125655 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1119 21:57:55.775299  125655 command_runner.go:130] > # the cgroup blockio controller.
	I1119 21:57:55.775305  125655 command_runner.go:130] > # blockio_config_file = ""
	I1119 21:57:55.775316  125655 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1119 21:57:55.775327  125655 command_runner.go:130] > # blockio parameters.
	I1119 21:57:55.775346  125655 command_runner.go:130] > # blockio_reload = false
	I1119 21:57:55.775357  125655 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1119 21:57:55.775363  125655 command_runner.go:130] > # irqbalance daemon.
	I1119 21:57:55.775379  125655 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1119 21:57:55.775387  125655 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1119 21:57:55.775397  125655 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1119 21:57:55.775412  125655 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1119 21:57:55.775428  125655 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1119 21:57:55.775441  125655 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1119 21:57:55.775450  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.775459  125655 command_runner.go:130] > # rdt_config_file = ""
	I1119 21:57:55.775465  125655 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1119 21:57:55.775470  125655 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1119 21:57:55.775549  125655 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1119 21:57:55.775561  125655 command_runner.go:130] > # separate_pull_cgroup = ""
	I1119 21:57:55.775567  125655 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1119 21:57:55.775573  125655 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1119 21:57:55.775576  125655 command_runner.go:130] > # will be added.
	I1119 21:57:55.775579  125655 command_runner.go:130] > # default_capabilities = [
	I1119 21:57:55.775584  125655 command_runner.go:130] > # 	"CHOWN",
	I1119 21:57:55.775591  125655 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1119 21:57:55.775604  125655 command_runner.go:130] > # 	"FSETID",
	I1119 21:57:55.775610  125655 command_runner.go:130] > # 	"FOWNER",
	I1119 21:57:55.775616  125655 command_runner.go:130] > # 	"SETGID",
	I1119 21:57:55.775623  125655 command_runner.go:130] > # 	"SETUID",
	I1119 21:57:55.775628  125655 command_runner.go:130] > # 	"SETPCAP",
	I1119 21:57:55.775634  125655 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1119 21:57:55.775641  125655 command_runner.go:130] > # 	"KILL",
	I1119 21:57:55.775646  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775659  125655 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1119 21:57:55.775666  125655 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1119 21:57:55.775672  125655 command_runner.go:130] > # add_inheritable_capabilities = false
	I1119 21:57:55.775685  125655 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1119 21:57:55.775705  125655 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1119 21:57:55.775716  125655 command_runner.go:130] > default_sysctls = [
	I1119 21:57:55.775723  125655 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1119 21:57:55.775728  125655 command_runner.go:130] > ]
	I1119 21:57:55.775738  125655 command_runner.go:130] > # List of devices on the host that a
	I1119 21:57:55.775747  125655 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1119 21:57:55.775754  125655 command_runner.go:130] > # allowed_devices = [
	I1119 21:57:55.775759  125655 command_runner.go:130] > # 	"/dev/fuse",
	I1119 21:57:55.775766  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775772  125655 command_runner.go:130] > # List of additional devices, specified as
	I1119 21:57:55.775779  125655 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1119 21:57:55.775791  125655 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1119 21:57:55.775801  125655 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1119 21:57:55.775807  125655 command_runner.go:130] > # additional_devices = [
	I1119 21:57:55.775813  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775824  125655 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1119 21:57:55.775830  125655 command_runner.go:130] > # cdi_spec_dirs = [
	I1119 21:57:55.775836  125655 command_runner.go:130] > # 	"/etc/cdi",
	I1119 21:57:55.775845  125655 command_runner.go:130] > # 	"/var/run/cdi",
	I1119 21:57:55.775850  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775860  125655 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1119 21:57:55.775871  125655 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1119 21:57:55.775894  125655 command_runner.go:130] > # Defaults to false.
	I1119 21:57:55.775901  125655 command_runner.go:130] > # device_ownership_from_security_context = false
	I1119 21:57:55.775919  125655 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1119 21:57:55.775932  125655 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1119 21:57:55.775938  125655 command_runner.go:130] > # hooks_dir = [
	I1119 21:57:55.775949  125655 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1119 21:57:55.775954  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775967  125655 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1119 21:57:55.775976  125655 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1119 21:57:55.775986  125655 command_runner.go:130] > # its default mounts from the following two files:
	I1119 21:57:55.775990  125655 command_runner.go:130] > #
	I1119 21:57:55.776006  125655 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1119 21:57:55.776015  125655 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1119 21:57:55.776024  125655 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1119 21:57:55.776032  125655 command_runner.go:130] > #
	I1119 21:57:55.776042  125655 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1119 21:57:55.776054  125655 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1119 21:57:55.776065  125655 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1119 21:57:55.776077  125655 command_runner.go:130] > #      only add mounts it finds in this file.
	I1119 21:57:55.776082  125655 command_runner.go:130] > #
	I1119 21:57:55.776089  125655 command_runner.go:130] > # default_mounts_file = ""
	I1119 21:57:55.776099  125655 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1119 21:57:55.776105  125655 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1119 21:57:55.776113  125655 command_runner.go:130] > pids_limit = 1024
	I1119 21:57:55.776123  125655 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1119 21:57:55.776136  125655 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1119 21:57:55.776145  125655 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1119 21:57:55.776161  125655 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1119 21:57:55.776171  125655 command_runner.go:130] > # log_size_max = -1
	I1119 21:57:55.776181  125655 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1119 21:57:55.776190  125655 command_runner.go:130] > # log_to_journald = false
	I1119 21:57:55.776199  125655 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1119 21:57:55.776213  125655 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1119 21:57:55.776219  125655 command_runner.go:130] > # Path to directory for container attach sockets.
	I1119 21:57:55.776229  125655 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1119 21:57:55.776238  125655 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1119 21:57:55.776248  125655 command_runner.go:130] > # bind_mount_prefix = ""
	I1119 21:57:55.776256  125655 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1119 21:57:55.776266  125655 command_runner.go:130] > # read_only = false
	I1119 21:57:55.776275  125655 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1119 21:57:55.776287  125655 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1119 21:57:55.776293  125655 command_runner.go:130] > # live configuration reload.
	I1119 21:57:55.776299  125655 command_runner.go:130] > # log_level = "info"
	I1119 21:57:55.776309  125655 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1119 21:57:55.776330  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.776339  125655 command_runner.go:130] > # log_filter = ""
	I1119 21:57:55.776349  125655 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1119 21:57:55.776364  125655 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1119 21:57:55.776372  125655 command_runner.go:130] > # separated by comma.
	I1119 21:57:55.776384  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776394  125655 command_runner.go:130] > # uid_mappings = ""
	I1119 21:57:55.776403  125655 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1119 21:57:55.776415  125655 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1119 21:57:55.776423  125655 command_runner.go:130] > # separated by comma.
	I1119 21:57:55.776433  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776443  125655 command_runner.go:130] > # gid_mappings = ""
	I1119 21:57:55.776452  125655 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1119 21:57:55.776465  125655 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1119 21:57:55.776478  125655 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1119 21:57:55.776490  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776498  125655 command_runner.go:130] > # minimum_mappable_uid = -1
	I1119 21:57:55.776507  125655 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1119 21:57:55.776520  125655 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1119 21:57:55.776530  125655 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1119 21:57:55.776540  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776548  125655 command_runner.go:130] > # minimum_mappable_gid = -1
	I1119 21:57:55.776557  125655 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1119 21:57:55.776569  125655 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1119 21:57:55.776587  125655 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1119 21:57:55.776597  125655 command_runner.go:130] > # ctr_stop_timeout = 30
	I1119 21:57:55.776607  125655 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1119 21:57:55.776619  125655 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1119 21:57:55.776626  125655 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1119 21:57:55.776637  125655 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1119 21:57:55.776642  125655 command_runner.go:130] > drop_infra_ctr = false
	I1119 21:57:55.776649  125655 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1119 21:57:55.776656  125655 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1119 21:57:55.776678  125655 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1119 21:57:55.776688  125655 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1119 21:57:55.776700  125655 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1119 21:57:55.776712  125655 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1119 21:57:55.776722  125655 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1119 21:57:55.776733  125655 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1119 21:57:55.776739  125655 command_runner.go:130] > # shared_cpuset = ""
	I1119 21:57:55.776751  125655 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1119 21:57:55.776759  125655 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1119 21:57:55.776765  125655 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1119 21:57:55.776775  125655 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1119 21:57:55.776785  125655 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1119 21:57:55.776794  125655 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1119 21:57:55.776810  125655 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1119 21:57:55.776820  125655 command_runner.go:130] > # enable_criu_support = false
	I1119 21:57:55.776828  125655 command_runner.go:130] > # Enable/disable the generation of the container,
	I1119 21:57:55.776840  125655 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1119 21:57:55.776847  125655 command_runner.go:130] > # enable_pod_events = false
	I1119 21:57:55.776856  125655 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1119 21:57:55.776862  125655 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1119 21:57:55.776870  125655 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1119 21:57:55.776886  125655 command_runner.go:130] > # default_runtime = "runc"
	I1119 21:57:55.776895  125655 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1119 21:57:55.776911  125655 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1119 21:57:55.776924  125655 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1119 21:57:55.776935  125655 command_runner.go:130] > # creation as a file is not desired either.
	I1119 21:57:55.776947  125655 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1119 21:57:55.776959  125655 command_runner.go:130] > # the hostname is being managed dynamically.
	I1119 21:57:55.776967  125655 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1119 21:57:55.776970  125655 command_runner.go:130] > # ]
	I1119 21:57:55.776979  125655 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1119 21:57:55.776993  125655 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1119 21:57:55.777006  125655 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1119 21:57:55.777024  125655 command_runner.go:130] > # Each entry in the table should follow the format:
	I1119 21:57:55.777032  125655 command_runner.go:130] > #
	I1119 21:57:55.777040  125655 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1119 21:57:55.777050  125655 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1119 21:57:55.777057  125655 command_runner.go:130] > # runtime_type = "oci"
	I1119 21:57:55.777119  125655 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1119 21:57:55.777131  125655 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1119 21:57:55.777138  125655 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1119 21:57:55.777145  125655 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1119 21:57:55.777152  125655 command_runner.go:130] > # monitor_env = []
	I1119 21:57:55.777163  125655 command_runner.go:130] > # privileged_without_host_devices = false
	I1119 21:57:55.777172  125655 command_runner.go:130] > # allowed_annotations = []
	I1119 21:57:55.777208  125655 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1119 21:57:55.777215  125655 command_runner.go:130] > # Where:
	I1119 21:57:55.777223  125655 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1119 21:57:55.777236  125655 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1119 21:57:55.777247  125655 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1119 21:57:55.777258  125655 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1119 21:57:55.777268  125655 command_runner.go:130] > #   in $PATH.
	I1119 21:57:55.777278  125655 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1119 21:57:55.777288  125655 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1119 21:57:55.777297  125655 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1119 21:57:55.777303  125655 command_runner.go:130] > #   state.
	I1119 21:57:55.777311  125655 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1119 21:57:55.777325  125655 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1119 21:57:55.777335  125655 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1119 21:57:55.777350  125655 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1119 21:57:55.777362  125655 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1119 21:57:55.777373  125655 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1119 21:57:55.777381  125655 command_runner.go:130] > #   The currently recognized values are:
	I1119 21:57:55.777388  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1119 21:57:55.777402  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1119 21:57:55.777414  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1119 21:57:55.777431  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1119 21:57:55.777446  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1119 21:57:55.777459  125655 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1119 21:57:55.777470  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1119 21:57:55.777478  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1119 21:57:55.777484  125655 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1119 21:57:55.777499  125655 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1119 21:57:55.777510  125655 command_runner.go:130] > #   deprecated option "conmon".
	I1119 21:57:55.777521  125655 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1119 21:57:55.777532  125655 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1119 21:57:55.777543  125655 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1119 21:57:55.777553  125655 command_runner.go:130] > #   should be moved to the container's cgroup
	I1119 21:57:55.777567  125655 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1119 21:57:55.777578  125655 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1119 21:57:55.777586  125655 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1119 21:57:55.777593  125655 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1119 21:57:55.777598  125655 command_runner.go:130] > #
	I1119 21:57:55.777606  125655 command_runner.go:130] > # Using the seccomp notifier feature:
	I1119 21:57:55.777612  125655 command_runner.go:130] > #
	I1119 21:57:55.777628  125655 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1119 21:57:55.777638  125655 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1119 21:57:55.777646  125655 command_runner.go:130] > #
	I1119 21:57:55.777655  125655 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1119 21:57:55.777667  125655 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1119 21:57:55.777672  125655 command_runner.go:130] > #
	I1119 21:57:55.777682  125655 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1119 21:57:55.777688  125655 command_runner.go:130] > # feature.
	I1119 21:57:55.777693  125655 command_runner.go:130] > #
	I1119 21:57:55.777701  125655 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1119 21:57:55.777709  125655 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1119 21:57:55.777719  125655 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1119 21:57:55.777728  125655 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1119 21:57:55.777741  125655 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1119 21:57:55.777752  125655 command_runner.go:130] > #
	I1119 21:57:55.777764  125655 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1119 21:57:55.777774  125655 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1119 21:57:55.777781  125655 command_runner.go:130] > #
	I1119 21:57:55.777788  125655 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1119 21:57:55.777794  125655 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1119 21:57:55.777799  125655 command_runner.go:130] > #
	I1119 21:57:55.777804  125655 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1119 21:57:55.777810  125655 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1119 21:57:55.777814  125655 command_runner.go:130] > # limitation.
	I1119 21:57:55.777820  125655 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1119 21:57:55.777824  125655 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1119 21:57:55.777829  125655 command_runner.go:130] > runtime_type = "oci"
	I1119 21:57:55.777835  125655 command_runner.go:130] > runtime_root = "/run/runc"
	I1119 21:57:55.777839  125655 command_runner.go:130] > runtime_config_path = ""
	I1119 21:57:55.777843  125655 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1119 21:57:55.777847  125655 command_runner.go:130] > monitor_cgroup = "pod"
	I1119 21:57:55.777853  125655 command_runner.go:130] > monitor_exec_cgroup = ""
	I1119 21:57:55.777857  125655 command_runner.go:130] > monitor_env = [
	I1119 21:57:55.777862  125655 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1119 21:57:55.777866  125655 command_runner.go:130] > ]
	I1119 21:57:55.777870  125655 command_runner.go:130] > privileged_without_host_devices = false
	I1119 21:57:55.777885  125655 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1119 21:57:55.777890  125655 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1119 21:57:55.777898  125655 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1119 21:57:55.777905  125655 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1119 21:57:55.777915  125655 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1119 21:57:55.777923  125655 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1119 21:57:55.777936  125655 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1119 21:57:55.777946  125655 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1119 21:57:55.777952  125655 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1119 21:57:55.777959  125655 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1119 21:57:55.777963  125655 command_runner.go:130] > # Example:
	I1119 21:57:55.777976  125655 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1119 21:57:55.777983  125655 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1119 21:57:55.777987  125655 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1119 21:57:55.777992  125655 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1119 21:57:55.777995  125655 command_runner.go:130] > # cpuset = 0
	I1119 21:57:55.777999  125655 command_runner.go:130] > # cpushares = "0-1"
	I1119 21:57:55.778002  125655 command_runner.go:130] > # Where:
	I1119 21:57:55.778006  125655 command_runner.go:130] > # The workload name is workload-type.
	I1119 21:57:55.778015  125655 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1119 21:57:55.778020  125655 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1119 21:57:55.778025  125655 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1119 21:57:55.778037  125655 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1119 21:57:55.778043  125655 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1119 21:57:55.778048  125655 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1119 21:57:55.778053  125655 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1119 21:57:55.778058  125655 command_runner.go:130] > # Default value is set to true
	I1119 21:57:55.778062  125655 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1119 21:57:55.778067  125655 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1119 21:57:55.778071  125655 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1119 21:57:55.778075  125655 command_runner.go:130] > # Default value is set to 'false'
	I1119 21:57:55.778079  125655 command_runner.go:130] > # disable_hostport_mapping = false
	I1119 21:57:55.778085  125655 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1119 21:57:55.778090  125655 command_runner.go:130] > #
	I1119 21:57:55.778095  125655 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1119 21:57:55.778101  125655 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1119 21:57:55.778106  125655 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1119 21:57:55.778115  125655 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1119 21:57:55.778120  125655 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1119 21:57:55.778125  125655 command_runner.go:130] > [crio.image]
	I1119 21:57:55.778131  125655 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1119 21:57:55.778135  125655 command_runner.go:130] > # default_transport = "docker://"
	I1119 21:57:55.778140  125655 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1119 21:57:55.778146  125655 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1119 21:57:55.778154  125655 command_runner.go:130] > # global_auth_file = ""
	I1119 21:57:55.778162  125655 command_runner.go:130] > # The image used to instantiate infra containers.
	I1119 21:57:55.778166  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.778171  125655 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.778176  125655 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1119 21:57:55.778184  125655 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1119 21:57:55.778189  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.778193  125655 command_runner.go:130] > # pause_image_auth_file = ""
	I1119 21:57:55.778201  125655 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1119 21:57:55.778210  125655 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1119 21:57:55.778216  125655 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1119 21:57:55.778221  125655 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1119 21:57:55.778226  125655 command_runner.go:130] > # pause_command = "/pause"
	I1119 21:57:55.778232  125655 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1119 21:57:55.778237  125655 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1119 21:57:55.778244  125655 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1119 21:57:55.778249  125655 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1119 21:57:55.778256  125655 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1119 21:57:55.778264  125655 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1119 21:57:55.778268  125655 command_runner.go:130] > # pinned_images = [
	I1119 21:57:55.778271  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778277  125655 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1119 21:57:55.778283  125655 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1119 21:57:55.778288  125655 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1119 21:57:55.778296  125655 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1119 21:57:55.778301  125655 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1119 21:57:55.778305  125655 command_runner.go:130] > # signature_policy = ""
	I1119 21:57:55.778310  125655 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1119 21:57:55.778316  125655 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1119 21:57:55.778323  125655 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1119 21:57:55.778328  125655 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1119 21:57:55.778336  125655 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1119 21:57:55.778341  125655 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1119 21:57:55.778354  125655 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1119 21:57:55.778360  125655 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1119 21:57:55.778364  125655 command_runner.go:130] > # changing them here.
	I1119 21:57:55.778368  125655 command_runner.go:130] > # insecure_registries = [
	I1119 21:57:55.778371  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778377  125655 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1119 21:57:55.778382  125655 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1119 21:57:55.778386  125655 command_runner.go:130] > # image_volumes = "mkdir"
	I1119 21:57:55.778390  125655 command_runner.go:130] > # Temporary directory to use for storing big files
	I1119 21:57:55.778396  125655 command_runner.go:130] > # big_files_temporary_dir = ""
	I1119 21:57:55.778401  125655 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1119 21:57:55.778405  125655 command_runner.go:130] > # CNI plugins.
	I1119 21:57:55.778411  125655 command_runner.go:130] > [crio.network]
	I1119 21:57:55.778416  125655 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1119 21:57:55.778421  125655 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1119 21:57:55.778427  125655 command_runner.go:130] > # cni_default_network = ""
	I1119 21:57:55.778432  125655 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1119 21:57:55.778436  125655 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1119 21:57:55.778441  125655 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1119 21:57:55.778445  125655 command_runner.go:130] > # plugin_dirs = [
	I1119 21:57:55.778448  125655 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1119 21:57:55.778450  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778461  125655 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1119 21:57:55.778467  125655 command_runner.go:130] > [crio.metrics]
	I1119 21:57:55.778471  125655 command_runner.go:130] > # Globally enable or disable metrics support.
	I1119 21:57:55.778475  125655 command_runner.go:130] > enable_metrics = true
	I1119 21:57:55.778479  125655 command_runner.go:130] > # Specify enabled metrics collectors.
	I1119 21:57:55.778485  125655 command_runner.go:130] > # Per default all metrics are enabled.
	I1119 21:57:55.778491  125655 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1119 21:57:55.778496  125655 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1119 21:57:55.778502  125655 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1119 21:57:55.778506  125655 command_runner.go:130] > # metrics_collectors = [
	I1119 21:57:55.778510  125655 command_runner.go:130] > # 	"operations",
	I1119 21:57:55.778519  125655 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1119 21:57:55.778526  125655 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1119 21:57:55.778529  125655 command_runner.go:130] > # 	"operations_errors",
	I1119 21:57:55.778533  125655 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1119 21:57:55.778537  125655 command_runner.go:130] > # 	"image_pulls_by_name",
	I1119 21:57:55.778541  125655 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1119 21:57:55.778545  125655 command_runner.go:130] > # 	"image_pulls_failures",
	I1119 21:57:55.778551  125655 command_runner.go:130] > # 	"image_pulls_successes",
	I1119 21:57:55.778556  125655 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1119 21:57:55.778559  125655 command_runner.go:130] > # 	"image_layer_reuse",
	I1119 21:57:55.778563  125655 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1119 21:57:55.778567  125655 command_runner.go:130] > # 	"containers_oom_total",
	I1119 21:57:55.778570  125655 command_runner.go:130] > # 	"containers_oom",
	I1119 21:57:55.778574  125655 command_runner.go:130] > # 	"processes_defunct",
	I1119 21:57:55.778578  125655 command_runner.go:130] > # 	"operations_total",
	I1119 21:57:55.778582  125655 command_runner.go:130] > # 	"operations_latency_seconds",
	I1119 21:57:55.778588  125655 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1119 21:57:55.778592  125655 command_runner.go:130] > # 	"operations_errors_total",
	I1119 21:57:55.778596  125655 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1119 21:57:55.778600  125655 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1119 21:57:55.778604  125655 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1119 21:57:55.778611  125655 command_runner.go:130] > # 	"image_pulls_success_total",
	I1119 21:57:55.778614  125655 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1119 21:57:55.778618  125655 command_runner.go:130] > # 	"containers_oom_count_total",
	I1119 21:57:55.778625  125655 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1119 21:57:55.778629  125655 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1119 21:57:55.778632  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778637  125655 command_runner.go:130] > # The port on which the metrics server will listen.
	I1119 21:57:55.778641  125655 command_runner.go:130] > # metrics_port = 9090
	I1119 21:57:55.778645  125655 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1119 21:57:55.778656  125655 command_runner.go:130] > # metrics_socket = ""
	I1119 21:57:55.778660  125655 command_runner.go:130] > # The certificate for the secure metrics server.
	I1119 21:57:55.778665  125655 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1119 21:57:55.778678  125655 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1119 21:57:55.778683  125655 command_runner.go:130] > # certificate on any modification event.
	I1119 21:57:55.778689  125655 command_runner.go:130] > # metrics_cert = ""
	I1119 21:57:55.778694  125655 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1119 21:57:55.778699  125655 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1119 21:57:55.778703  125655 command_runner.go:130] > # metrics_key = ""
	I1119 21:57:55.778708  125655 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1119 21:57:55.778714  125655 command_runner.go:130] > [crio.tracing]
	I1119 21:57:55.778719  125655 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1119 21:57:55.778722  125655 command_runner.go:130] > # enable_tracing = false
	I1119 21:57:55.778729  125655 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1119 21:57:55.778733  125655 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1119 21:57:55.778739  125655 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1119 21:57:55.778746  125655 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1119 21:57:55.778750  125655 command_runner.go:130] > # CRI-O NRI configuration.
	I1119 21:57:55.778753  125655 command_runner.go:130] > [crio.nri]
	I1119 21:57:55.778757  125655 command_runner.go:130] > # Globally enable or disable NRI.
	I1119 21:57:55.778761  125655 command_runner.go:130] > # enable_nri = false
	I1119 21:57:55.778766  125655 command_runner.go:130] > # NRI socket to listen on.
	I1119 21:57:55.778772  125655 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1119 21:57:55.778776  125655 command_runner.go:130] > # NRI plugin directory to use.
	I1119 21:57:55.778783  125655 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1119 21:57:55.778787  125655 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1119 21:57:55.778791  125655 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1119 21:57:55.778796  125655 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1119 21:57:55.778801  125655 command_runner.go:130] > # nri_disable_connections = false
	I1119 21:57:55.778805  125655 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1119 21:57:55.778809  125655 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1119 21:57:55.778814  125655 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1119 21:57:55.778818  125655 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1119 21:57:55.778823  125655 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1119 21:57:55.778826  125655 command_runner.go:130] > [crio.stats]
	I1119 21:57:55.778831  125655 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1119 21:57:55.778844  125655 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1119 21:57:55.778848  125655 command_runner.go:130] > # stats_collection_period = 0
	I1119 21:57:55.778894  125655 command_runner.go:130] ! time="2025-11-19 21:57:55.755704188Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1119 21:57:55.778909  125655 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1119 21:57:55.779016  125655 cni.go:84] Creating CNI manager for ""
	I1119 21:57:55.779032  125655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:57:55.779052  125655 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:57:55.779081  125655 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-274272 NodeName:functional-274272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:57:55.779230  125655 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-274272"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.56"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 21:57:55.779314  125655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:57:55.791865  125655 command_runner.go:130] > kubeadm
	I1119 21:57:55.791898  125655 command_runner.go:130] > kubectl
	I1119 21:57:55.791902  125655 command_runner.go:130] > kubelet
	I1119 21:57:55.792339  125655 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:57:55.792402  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:57:55.804500  125655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1119 21:57:55.831211  125655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:57:55.857336  125655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1119 21:57:55.883585  125655 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I1119 21:57:55.888240  125655 command_runner.go:130] > 192.168.39.56	control-plane.minikube.internal
	I1119 21:57:55.888422  125655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:57:56.081809  125655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:57:56.102791  125655 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272 for IP: 192.168.39.56
	I1119 21:57:56.102821  125655 certs.go:195] generating shared ca certs ...
	I1119 21:57:56.102844  125655 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:57:56.103063  125655 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 21:57:56.103136  125655 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 21:57:56.103152  125655 certs.go:257] generating profile certs ...
	I1119 21:57:56.103293  125655 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/client.key
	I1119 21:57:56.103368  125655 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key.ff709108
	I1119 21:57:56.103443  125655 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key
	I1119 21:57:56.103459  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 21:57:56.103484  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 21:57:56.103511  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 21:57:56.103529  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 21:57:56.103543  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 21:57:56.103561  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 21:57:56.103579  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 21:57:56.103596  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 21:57:56.103672  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 21:57:56.103719  125655 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 21:57:56.103738  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 21:57:56.103773  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 21:57:56.103801  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:57:56.103827  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 21:57:56.103904  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 21:57:56.103946  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.103967  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.103983  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.104844  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:57:56.137315  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 21:57:56.170238  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:57:56.201511  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 21:57:56.232500  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 21:57:56.263196  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:57:56.293733  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:57:56.325433  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 21:57:56.358184  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 21:57:56.390372  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:57:56.421898  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 21:57:56.453376  125655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:57:56.475453  125655 ssh_runner.go:195] Run: openssl version
	I1119 21:57:56.482740  125655 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1119 21:57:56.482959  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:57:56.496693  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502262  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502418  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502483  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.510342  125655 command_runner.go:130] > b5213941
	I1119 21:57:56.510469  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 21:57:56.522344  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 21:57:56.536631  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542029  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542209  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542274  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.550380  125655 command_runner.go:130] > 51391683
	I1119 21:57:56.550501  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 21:57:56.561821  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 21:57:56.575290  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.580784  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.581086  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.581144  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.588956  125655 command_runner.go:130] > 3ec20f2e
	I1119 21:57:56.589037  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 21:57:56.601212  125655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:57:56.606946  125655 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:57:56.606978  125655 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1119 21:57:56.606987  125655 command_runner.go:130] > Device: 253,1	Inode: 9430692     Links: 1
	I1119 21:57:56.606996  125655 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1119 21:57:56.607006  125655 command_runner.go:130] > Access: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607014  125655 command_runner.go:130] > Modify: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607022  125655 command_runner.go:130] > Change: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607031  125655 command_runner.go:130] >  Birth: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607101  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 21:57:56.614717  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.614807  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 21:57:56.622151  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.622364  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 21:57:56.629649  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.630010  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 21:57:56.637598  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.637675  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 21:57:56.645478  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.645584  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 21:57:56.652788  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.653060  125655 kubeadm.go:401] StartCluster: {Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:57:56.653151  125655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:57:56.653212  125655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:57:56.693386  125655 command_runner.go:130] > 0106e4f2ce61898cba6e2b7a948933217c923a83f47dc98f8e443135d7c1953c
	I1119 21:57:56.693418  125655 command_runner.go:130] > 27fcf5ffa9c5cce8c4adcef8f00caed64a4fab40166ca861d697c36836f01dc9
	I1119 21:57:56.693423  125655 command_runner.go:130] > 33d592c056efc2e2c713428bd1a12974c07ffee6ed0d926d5ef6cd0cca4db55d
	I1119 21:57:56.693431  125655 command_runner.go:130] > 899acc3a073d3b1e8a64e329e67a0c9a8014c3e5fb818300620298c210fd33f1
	I1119 21:57:56.693436  125655 command_runner.go:130] > cb88ba1c8d8cc6830004837e32b6220121207668f01c3560a51e2a8ce1e36ed3
	I1119 21:57:56.693441  125655 command_runner.go:130] > 077ec5fba8a3dcc009b2738e8bb85762db7ca0d2d2f4153471471dce4bb69d58
	I1119 21:57:56.693445  125655 command_runner.go:130] > f188ee072392f6539a4fc0dbc95ec1b18f76377b44c1ca821fc51f07d6c4ec6b
	I1119 21:57:56.693453  125655 command_runner.go:130] > 94bc4b1fc9ffc618caea337c906ff95370af6d466ae4d510d6d81512364b13b1
	I1119 21:57:56.693458  125655 command_runner.go:130] > 8a2841a674b031e27e3be5469070765869d325fd68f393a4e80843dbe974314f
	I1119 21:57:56.693463  125655 command_runner.go:130] > 37c3495f685326ea33fc62203369d3d57a6b53431dd8b780278885837239fdfc
	I1119 21:57:56.693468  125655 command_runner.go:130] > 931760b60afe373933492aa207a6c5231c4e48d65a493c258f05f5ff220173d5
	I1119 21:57:56.693473  125655 command_runner.go:130] > d2921c6f5f07b3d3a3ca4ccf47283bb6bb22d080b6da461bd67a7b1660db1191
	I1119 21:57:56.695025  125655 cri.go:89] found id: "0106e4f2ce61898cba6e2b7a948933217c923a83f47dc98f8e443135d7c1953c"
	I1119 21:57:56.695043  125655 cri.go:89] found id: "27fcf5ffa9c5cce8c4adcef8f00caed64a4fab40166ca861d697c36836f01dc9"
	I1119 21:57:56.695048  125655 cri.go:89] found id: "33d592c056efc2e2c713428bd1a12974c07ffee6ed0d926d5ef6cd0cca4db55d"
	I1119 21:57:56.695053  125655 cri.go:89] found id: "899acc3a073d3b1e8a64e329e67a0c9a8014c3e5fb818300620298c210fd33f1"
	I1119 21:57:56.695056  125655 cri.go:89] found id: "cb88ba1c8d8cc6830004837e32b6220121207668f01c3560a51e2a8ce1e36ed3"
	I1119 21:57:56.695061  125655 cri.go:89] found id: "077ec5fba8a3dcc009b2738e8bb85762db7ca0d2d2f4153471471dce4bb69d58"
	I1119 21:57:56.695067  125655 cri.go:89] found id: "f188ee072392f6539a4fc0dbc95ec1b18f76377b44c1ca821fc51f07d6c4ec6b"
	I1119 21:57:56.695072  125655 cri.go:89] found id: "94bc4b1fc9ffc618caea337c906ff95370af6d466ae4d510d6d81512364b13b1"
	I1119 21:57:56.695077  125655 cri.go:89] found id: "8a2841a674b031e27e3be5469070765869d325fd68f393a4e80843dbe974314f"
	I1119 21:57:56.695088  125655 cri.go:89] found id: "37c3495f685326ea33fc62203369d3d57a6b53431dd8b780278885837239fdfc"
	I1119 21:57:56.695093  125655 cri.go:89] found id: "931760b60afe373933492aa207a6c5231c4e48d65a493c258f05f5ff220173d5"
	I1119 21:57:56.695097  125655 cri.go:89] found id: "d2921c6f5f07b3d3a3ca4ccf47283bb6bb22d080b6da461bd67a7b1660db1191"
	I1119 21:57:56.695101  125655 cri.go:89] found id: ""
	I1119 21:57:56.695155  125655 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-274272 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 13m55.620366889s for "functional-274272" cluster.
I1119 22:10:11.087268  121369 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
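
For orientation, here is a minimal sketch of what this soft-start step boils down to; it is an illustration, not the actual functional_test.go code, and the binary path, profile name and flags are simply copied from the failing invocation above: re-run "minikube start" against the existing profile and treat any non-zero exit status (80 in this run) as a failure.

// Sketch only: the flag set mirrors the failed invocation above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "functional-274272", "--alsologtostderr", "-v=8")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// err carries the exit status (80 above); out holds the combined stdout/stderr.
		fmt.Printf("soft start failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("soft start succeeded")
}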
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-274272 -n functional-274272
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-274272 -n functional-274272: exit status 2 (207.782694ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
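The "may be ok" note reflects that minikube's status command encodes degraded components in its exit code while still printing the host state ("Running" above). A minimal sketch of that handling follows; it is an assumption about the intent, not a reproduction of helpers_test.go.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}}` for a profile/node and
// returns the printed host state together with the exit code, so a non-zero
// code (such as the 2 seen above) is surfaced instead of treated as a hard error.
func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil
	}
	return string(out), 0, err
}

func main() {
	state, code, err := hostStatus("functional-274272")
	fmt.Printf("host=%q exit=%d err=%v\n", state, code, err)
}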
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 logs -n 25
E1119 22:10:48.173079  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:14:25.093988  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:19:25.102646  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-274272 logs -n 25: (12m20.947036737s)
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ addons │ addons-638975 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ ip │ addons-638975 ip │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons │ addons-638975 addons disable ingress-dns --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons │ addons-638975 addons disable ingress --alsologtostderr -v=1 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ stop │ -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:54 UTC │
	│ addons │ enable dashboard -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ addons │ disable dashboard -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ addons │ disable gvisor -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ delete │ -p addons-638975 │ addons-638975 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ start │ -p nospam-527873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-527873 --driver=kvm2  --container-runtime=crio │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ start │ nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ │
	│ start │ nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ │
	│ start │ nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ │
	│ pause │ nospam-527873 --log_dir /tmp/nospam-527873 pause │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ pause │ nospam-527873 --log_dir /tmp/nospam-527873 pause │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ pause │ nospam-527873 --log_dir /tmp/nospam-527873 pause │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ unpause │ nospam-527873 --log_dir /tmp/nospam-527873 unpause │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ unpause │ nospam-527873 --log_dir /tmp/nospam-527873 unpause │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ unpause │ nospam-527873 --log_dir /tmp/nospam-527873 unpause │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ stop │ nospam-527873 --log_dir /tmp/nospam-527873 stop │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ stop │ nospam-527873 --log_dir /tmp/nospam-527873 stop │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ stop │ nospam-527873 --log_dir /tmp/nospam-527873 stop │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ delete │ -p nospam-527873 │ nospam-527873 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ start │ -p functional-274272 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio │ functional-274272 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:56 UTC │
	│ start │ -p functional-274272 --alsologtostderr -v=8 │ functional-274272 │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:56:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:56:15.524505  125655 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:56:15.524640  125655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:56:15.524650  125655 out.go:374] Setting ErrFile to fd 2...
	I1119 21:56:15.524653  125655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:56:15.524902  125655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 21:56:15.525357  125655 out.go:368] Setting JSON to false
	I1119 21:56:15.526238  125655 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13122,"bootTime":1763576253,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:56:15.526344  125655 start.go:143] virtualization: kvm guest
	I1119 21:56:15.529220  125655 out.go:179] * [functional-274272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:56:15.530888  125655 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:56:15.530892  125655 notify.go:221] Checking for updates...
	I1119 21:56:15.533320  125655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:56:15.534592  125655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 21:56:15.535896  125655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:56:15.537284  125655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:56:15.538692  125655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:56:15.540498  125655 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:56:15.540627  125655 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:56:15.578138  125655 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 21:56:15.579740  125655 start.go:309] selected driver: kvm2
	I1119 21:56:15.579759  125655 start.go:930] validating driver "kvm2" against &{Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:56:15.579860  125655 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:56:15.580973  125655 cni.go:84] Creating CNI manager for ""
	I1119 21:56:15.581058  125655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:56:15.581134  125655 start.go:353] cluster config:
	{Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:56:15.581282  125655 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:56:15.582970  125655 out.go:179] * Starting "functional-274272" primary control-plane node in "functional-274272" cluster
	I1119 21:56:15.584343  125655 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:56:15.584377  125655 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 21:56:15.584386  125655 cache.go:65] Caching tarball of preloaded images
	I1119 21:56:15.584490  125655 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 21:56:15.584505  125655 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:56:15.584592  125655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/config.json ...
	I1119 21:56:15.584871  125655 start.go:360] acquireMachinesLock for functional-274272: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 21:56:15.584938  125655 start.go:364] duration metric: took 31.116µs to acquireMachinesLock for "functional-274272"
	I1119 21:56:15.584961  125655 start.go:96] Skipping create...Using existing machine configuration
	I1119 21:56:15.584971  125655 fix.go:54] fixHost starting: 
	I1119 21:56:15.587127  125655 fix.go:112] recreateIfNeeded on functional-274272: state=Running err=<nil>
	W1119 21:56:15.587160  125655 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 21:56:15.589621  125655 out.go:252] * Updating the running kvm2 "functional-274272" VM ...
	I1119 21:56:15.589658  125655 machine.go:94] provisionDockerMachine start ...
	I1119 21:56:15.592549  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.593154  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.593187  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.593360  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.593603  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.593618  125655 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:56:15.702194  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-274272
	
	I1119 21:56:15.702244  125655 buildroot.go:166] provisioning hostname "functional-274272"
	I1119 21:56:15.705141  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.705571  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.705614  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.705842  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.706110  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.706125  125655 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-274272 && echo "functional-274272" | sudo tee /etc/hostname
	I1119 21:56:15.846160  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-274272
	
	I1119 21:56:15.849601  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.850076  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.850116  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.850306  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.850538  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.850562  125655 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-274272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-274272/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-274272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:56:15.958572  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 21:56:15.958602  125655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 21:56:15.958621  125655 buildroot.go:174] setting up certificates
	I1119 21:56:15.958644  125655 provision.go:84] configureAuth start
	I1119 21:56:15.961541  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.961948  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.961978  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.964387  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.964833  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.964860  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.965011  125655 provision.go:143] copyHostCerts
	I1119 21:56:15.965045  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 21:56:15.965088  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 21:56:15.965106  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 21:56:15.965186  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 21:56:15.965327  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 21:56:15.965363  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 21:56:15.965371  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 21:56:15.965420  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 21:56:15.965509  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 21:56:15.965533  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 21:56:15.965543  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 21:56:15.965592  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 21:56:15.965675  125655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.functional-274272 san=[127.0.0.1 192.168.39.56 functional-274272 localhost minikube]
	I1119 21:56:16.178107  125655 provision.go:177] copyRemoteCerts
	I1119 21:56:16.178177  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:56:16.180523  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.180929  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:16.180960  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.181094  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:16.267429  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 21:56:16.267516  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 21:56:16.303049  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 21:56:16.303134  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:56:16.336134  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 21:56:16.336220  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 21:56:16.369360  125655 provision.go:87] duration metric: took 410.702355ms to configureAuth
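	(The server certificate generated above also lands on the host at .minikube/machines/server.pem; a minimal sketch for confirming its SANs, assuming openssl is installed on the Jenkins host and MINIKUBE_HOME is the .minikube directory shown earlier in this log:)
	    openssl x509 -in "$MINIKUBE_HOME/machines/server.pem" -noout -text | grep -A1 'Subject Alternative Name'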
	I1119 21:56:16.369395  125655 buildroot.go:189] setting minikube options for container-runtime
	I1119 21:56:16.369609  125655 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:56:16.372543  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.372941  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:16.372970  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.373148  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:16.373382  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:16.373404  125655 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:56:21.981912  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:56:21.981950  125655 machine.go:97] duration metric: took 6.392282192s to provisionDockerMachine
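	(provisionDockerMachine ends by writing the CRIO_MINIKUBE_OPTIONS drop-in and restarting CRI-O; a quick sanity check from inside the guest, e.g. via `minikube -p functional-274272 ssh`, might look like:)
	    cat /etc/sysconfig/crio.minikube
	    sudo systemctl is-active crio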
	I1119 21:56:21.981967  125655 start.go:293] postStartSetup for "functional-274272" (driver="kvm2")
	I1119 21:56:21.981980  125655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:56:21.982049  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:56:21.985113  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:21.985484  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:21.985537  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:21.985749  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.102924  125655 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:56:22.114116  125655 command_runner.go:130] > NAME=Buildroot
	I1119 21:56:22.114134  125655 command_runner.go:130] > VERSION=2025.02-dirty
	I1119 21:56:22.114138  125655 command_runner.go:130] > ID=buildroot
	I1119 21:56:22.114143  125655 command_runner.go:130] > VERSION_ID=2025.02
	I1119 21:56:22.114148  125655 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1119 21:56:22.114192  125655 info.go:137] Remote host: Buildroot 2025.02
	I1119 21:56:22.114211  125655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 21:56:22.114270  125655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 21:56:22.114383  125655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 21:56:22.114400  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 21:56:22.114498  125655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts -> hosts in /etc/test/nested/copy/121369
	I1119 21:56:22.114510  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts -> /etc/test/nested/copy/121369/hosts
	I1119 21:56:22.114560  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/121369
	I1119 21:56:22.154301  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 21:56:22.234489  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts --> /etc/test/nested/copy/121369/hosts (40 bytes)
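	(The two scp calls above push the host-side test assets into the guest; they can be confirmed from the guest shell with a plain listing, for example:)
	    ls -l /etc/ssl/certs/1213692.pem /etc/test/nested/copy/121369/hosts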
	I1119 21:56:22.328918  125655 start.go:296] duration metric: took 346.928603ms for postStartSetup
	I1119 21:56:22.328975  125655 fix.go:56] duration metric: took 6.74400308s for fixHost
	I1119 21:56:22.332245  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.332719  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.332761  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.333032  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:22.333335  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:22.333355  125655 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 21:56:22.524275  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763589382.515165407
	
	I1119 21:56:22.524306  125655 fix.go:216] guest clock: 1763589382.515165407
	I1119 21:56:22.524317  125655 fix.go:229] Guest: 2025-11-19 21:56:22.515165407 +0000 UTC Remote: 2025-11-19 21:56:22.328982326 +0000 UTC m=+6.856474824 (delta=186.183081ms)
	I1119 21:56:22.524340  125655 fix.go:200] guest clock delta is within tolerance: 186.183081ms
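	(The delta is simply Guest minus Remote from the line above: 1763589382.515165407 - 1763589382.328982326 = 0.186183081 s, i.e. the reported 186.183081ms, which is why no clock adjustment is made.)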
	I1119 21:56:22.524348  125655 start.go:83] releasing machines lock for "functional-274272", held for 6.939395313s
	I1119 21:56:22.527518  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.527977  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.528013  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.528866  125655 ssh_runner.go:195] Run: cat /version.json
	I1119 21:56:22.528919  125655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:56:22.532219  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532345  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532671  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.532706  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532818  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.532846  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532915  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.533175  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.669778  125655 command_runner.go:130] > {"iso_version": "v1.37.0-1763575914-21918", "kicbase_version": "v0.0.48-1763561786-21918", "minikube_version": "v1.37.0", "commit": "425f5f15185086235ffd9f03de5624881b145800"}
	I1119 21:56:22.670044  125655 ssh_runner.go:195] Run: systemctl --version
	I1119 21:56:22.709748  125655 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1119 21:56:22.714621  125655 command_runner.go:130] > systemd 256 (256.7)
	I1119 21:56:22.714659  125655 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1119 21:56:22.714732  125655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:56:22.954339  125655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1119 21:56:22.974290  125655 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1119 21:56:22.977797  125655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:56:22.977901  125655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:56:23.008282  125655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 21:56:23.008311  125655 start.go:496] detecting cgroup driver to use...
	I1119 21:56:23.008412  125655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:56:23.105440  125655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:56:23.135903  125655 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:56:23.135971  125655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:56:23.175334  125655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:56:23.257801  125655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:56:23.591204  125655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:56:23.882315  125655 docker.go:234] disabling docker service ...
	I1119 21:56:23.882405  125655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:56:23.921893  125655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:56:23.944430  125655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:56:24.230558  125655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:56:24.528514  125655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
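	(With containerd, cri-docker and docker all ruled out, the run continues below by pointing crictl at CRI-O and forcing the cgroupfs cgroup driver; as an aside, the cgroup hierarchy the guest kernel exposes can be checked with GNU stat, assuming it is present in the image:)
	    stat -fc %T /sys/fs/cgroup/    # cgroup2fs means cgroup v2, tmpfs means cgroup v1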
	I1119 21:56:24.549079  125655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:56:24.594053  125655 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1119 21:56:24.595417  125655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:56:24.595501  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.619383  125655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 21:56:24.619478  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.644219  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.664023  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.686767  125655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:56:24.708545  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.732834  125655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.757166  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.777845  125655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:56:24.796276  125655 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1119 21:56:24.796965  125655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:56:24.817150  125655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:56:25.056155  125655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 21:57:55.500242  125655 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.444033021s)
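	(The sed calls above only touch four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup and the unprivileged-port sysctl; a one-line check of the result on the guest, as a sketch:)
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf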
	I1119 21:57:55.500288  125655 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:57:55.500356  125655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:57:55.507439  125655 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1119 21:57:55.507472  125655 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1119 21:57:55.507489  125655 command_runner.go:130] > Device: 0,23	Inode: 1960        Links: 1
	I1119 21:57:55.507496  125655 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1119 21:57:55.507501  125655 command_runner.go:130] > Access: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507518  125655 command_runner.go:130] > Modify: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507523  125655 command_runner.go:130] > Change: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507528  125655 command_runner.go:130] >  Birth: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507551  125655 start.go:564] Will wait 60s for crictl version
	I1119 21:57:55.507616  125655 ssh_runner.go:195] Run: which crictl
	I1119 21:57:55.512454  125655 command_runner.go:130] > /usr/bin/crictl
	I1119 21:57:55.512630  125655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 21:57:55.557330  125655 command_runner.go:130] > Version:  0.1.0
	I1119 21:57:55.557354  125655 command_runner.go:130] > RuntimeName:  cri-o
	I1119 21:57:55.557359  125655 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1119 21:57:55.557366  125655 command_runner.go:130] > RuntimeApiVersion:  v1
	I1119 21:57:55.557387  125655 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
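	(Because /etc/crictl.yaml was pointed at the CRI-O socket earlier in this run, the same version information can be pulled by hand on the guest:)
	    sudo crictl version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version    # equivalent, bypassing /etc/crictl.yaml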
	I1119 21:57:55.557484  125655 ssh_runner.go:195] Run: crio --version
	I1119 21:57:55.589692  125655 command_runner.go:130] > crio version 1.29.1
	I1119 21:57:55.589714  125655 command_runner.go:130] > Version:        1.29.1
	I1119 21:57:55.589733  125655 command_runner.go:130] > GitCommit:      unknown
	I1119 21:57:55.589738  125655 command_runner.go:130] > GitCommitDate:  unknown
	I1119 21:57:55.589742  125655 command_runner.go:130] > GitTreeState:   clean
	I1119 21:57:55.589748  125655 command_runner.go:130] > BuildDate:      2025-11-19T21:18:08Z
	I1119 21:57:55.589752  125655 command_runner.go:130] > GoVersion:      go1.23.4
	I1119 21:57:55.589755  125655 command_runner.go:130] > Compiler:       gc
	I1119 21:57:55.589760  125655 command_runner.go:130] > Platform:       linux/amd64
	I1119 21:57:55.589763  125655 command_runner.go:130] > Linkmode:       dynamic
	I1119 21:57:55.589779  125655 command_runner.go:130] > BuildTags:      
	I1119 21:57:55.589785  125655 command_runner.go:130] >   containers_image_ostree_stub
	I1119 21:57:55.589789  125655 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1119 21:57:55.589793  125655 command_runner.go:130] >   btrfs_noversion
	I1119 21:57:55.589798  125655 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1119 21:57:55.589802  125655 command_runner.go:130] >   libdm_no_deferred_remove
	I1119 21:57:55.589809  125655 command_runner.go:130] >   seccomp
	I1119 21:57:55.589813  125655 command_runner.go:130] > LDFlags:          unknown
	I1119 21:57:55.589817  125655 command_runner.go:130] > SeccompEnabled:   true
	I1119 21:57:55.589824  125655 command_runner.go:130] > AppArmorEnabled:  false
	I1119 21:57:55.590824  125655 ssh_runner.go:195] Run: crio --version
	I1119 21:57:55.623734  125655 command_runner.go:130] > crio version 1.29.1
	I1119 21:57:55.623758  125655 command_runner.go:130] > Version:        1.29.1
	I1119 21:57:55.623767  125655 command_runner.go:130] > GitCommit:      unknown
	I1119 21:57:55.623773  125655 command_runner.go:130] > GitCommitDate:  unknown
	I1119 21:57:55.623778  125655 command_runner.go:130] > GitTreeState:   clean
	I1119 21:57:55.623785  125655 command_runner.go:130] > BuildDate:      2025-11-19T21:18:08Z
	I1119 21:57:55.623791  125655 command_runner.go:130] > GoVersion:      go1.23.4
	I1119 21:57:55.623797  125655 command_runner.go:130] > Compiler:       gc
	I1119 21:57:55.623803  125655 command_runner.go:130] > Platform:       linux/amd64
	I1119 21:57:55.623808  125655 command_runner.go:130] > Linkmode:       dynamic
	I1119 21:57:55.623815  125655 command_runner.go:130] > BuildTags:      
	I1119 21:57:55.623822  125655 command_runner.go:130] >   containers_image_ostree_stub
	I1119 21:57:55.623832  125655 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1119 21:57:55.623838  125655 command_runner.go:130] >   btrfs_noversion
	I1119 21:57:55.623847  125655 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1119 21:57:55.623868  125655 command_runner.go:130] >   libdm_no_deferred_remove
	I1119 21:57:55.623897  125655 command_runner.go:130] >   seccomp
	I1119 21:57:55.623907  125655 command_runner.go:130] > LDFlags:          unknown
	I1119 21:57:55.623914  125655 command_runner.go:130] > SeccompEnabled:   true
	I1119 21:57:55.623922  125655 command_runner.go:130] > AppArmorEnabled:  false
	I1119 21:57:55.626580  125655 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 21:57:55.630696  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:57:55.631264  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:57:55.631302  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:57:55.631528  125655 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 21:57:55.636396  125655 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1119 21:57:55.636493  125655 kubeadm.go:884] updating cluster {Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:57:55.636629  125655 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:57:55.636691  125655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:57:55.684104  125655 command_runner.go:130] > {
	I1119 21:57:55.684131  125655 command_runner.go:130] >   "images": [
	I1119 21:57:55.684137  125655 command_runner.go:130] >     {
	I1119 21:57:55.684148  125655 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1119 21:57:55.684155  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684165  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1119 21:57:55.684170  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684176  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684188  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1119 21:57:55.684199  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1119 21:57:55.684209  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684215  125655 command_runner.go:130] >       "size": "109379124",
	I1119 21:57:55.684222  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684231  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684259  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684270  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684275  125655 command_runner.go:130] >     },
	I1119 21:57:55.684280  125655 command_runner.go:130] >     {
	I1119 21:57:55.684290  125655 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1119 21:57:55.684299  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684308  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1119 21:57:55.684317  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684323  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684344  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1119 21:57:55.684360  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1119 21:57:55.684366  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684377  125655 command_runner.go:130] >       "size": "31470524",
	I1119 21:57:55.684385  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684392  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684399  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684407  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684412  125655 command_runner.go:130] >     },
	I1119 21:57:55.684421  125655 command_runner.go:130] >     {
	I1119 21:57:55.684430  125655 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1119 21:57:55.684439  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684447  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1119 21:57:55.684457  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684463  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684478  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1119 21:57:55.684492  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1119 21:57:55.684501  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684507  125655 command_runner.go:130] >       "size": "76103547",
	I1119 21:57:55.684514  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684521  125655 command_runner.go:130] >       "username": "nonroot",
	I1119 21:57:55.684530  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684536  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684544  125655 command_runner.go:130] >     },
	I1119 21:57:55.684551  125655 command_runner.go:130] >     {
	I1119 21:57:55.684561  125655 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1119 21:57:55.684567  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684578  125655 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1119 21:57:55.684584  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684591  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684603  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1119 21:57:55.684630  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1119 21:57:55.684645  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684659  125655 command_runner.go:130] >       "size": "195976448",
	I1119 21:57:55.684669  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684675  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684681  125655 command_runner.go:130] >       },
	I1119 21:57:55.684687  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684693  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684699  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684704  125655 command_runner.go:130] >     },
	I1119 21:57:55.684719  125655 command_runner.go:130] >     {
	I1119 21:57:55.684731  125655 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1119 21:57:55.684738  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684745  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1119 21:57:55.684753  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684759  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684771  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1119 21:57:55.684783  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1119 21:57:55.684791  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684796  125655 command_runner.go:130] >       "size": "89046001",
	I1119 21:57:55.684802  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684808  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684817  125655 command_runner.go:130] >       },
	I1119 21:57:55.684822  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684830  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684836  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684844  125655 command_runner.go:130] >     },
	I1119 21:57:55.684849  125655 command_runner.go:130] >     {
	I1119 21:57:55.684860  125655 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1119 21:57:55.684866  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684898  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1119 21:57:55.684908  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684914  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684927  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1119 21:57:55.684940  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1119 21:57:55.684953  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684962  125655 command_runner.go:130] >       "size": "76004181",
	I1119 21:57:55.684968  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684976  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684981  125655 command_runner.go:130] >       },
	I1119 21:57:55.684990  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684995  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685004  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685009  125655 command_runner.go:130] >     },
	I1119 21:57:55.685015  125655 command_runner.go:130] >     {
	I1119 21:57:55.685025  125655 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1119 21:57:55.685034  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685041  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1119 21:57:55.685049  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685055  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685069  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1119 21:57:55.685081  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1119 21:57:55.685090  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685095  125655 command_runner.go:130] >       "size": "73138073",
	I1119 21:57:55.685104  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.685110  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685119  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685125  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685134  125655 command_runner.go:130] >     },
	I1119 21:57:55.685140  125655 command_runner.go:130] >     {
	I1119 21:57:55.685151  125655 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1119 21:57:55.685158  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685166  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1119 21:57:55.685174  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685181  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685213  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1119 21:57:55.685226  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1119 21:57:55.685240  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685255  125655 command_runner.go:130] >       "size": "53844823",
	I1119 21:57:55.685264  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.685270  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.685278  125655 command_runner.go:130] >       },
	I1119 21:57:55.685285  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685294  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685299  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685307  125655 command_runner.go:130] >     },
	I1119 21:57:55.685311  125655 command_runner.go:130] >     {
	I1119 21:57:55.685322  125655 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1119 21:57:55.685327  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685334  125655 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.685339  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685346  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685357  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1119 21:57:55.685370  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1119 21:57:55.685378  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685383  125655 command_runner.go:130] >       "size": "742092",
	I1119 21:57:55.685390  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.685396  125655 command_runner.go:130] >         "value": "65535"
	I1119 21:57:55.685403  125655 command_runner.go:130] >       },
	I1119 21:57:55.685408  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685414  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685419  125655 command_runner.go:130] >       "pinned": true
	I1119 21:57:55.685427  125655 command_runner.go:130] >     }
	I1119 21:57:55.685433  125655 command_runner.go:130] >   ]
	I1119 21:57:55.685437  125655 command_runner.go:130] > }
	I1119 21:57:55.686566  125655 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:57:55.686586  125655 crio.go:433] Images already preloaded, skipping extraction
	I1119 21:57:55.686648  125655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:57:55.724518  125655 command_runner.go:130] > {
	I1119 21:57:55.724544  125655 command_runner.go:130] >   "images": [
	I1119 21:57:55.724550  125655 command_runner.go:130] >     {
	I1119 21:57:55.724562  125655 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1119 21:57:55.724570  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724578  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1119 21:57:55.724582  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724587  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724597  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1119 21:57:55.724607  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1119 21:57:55.724613  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724619  125655 command_runner.go:130] >       "size": "109379124",
	I1119 21:57:55.724626  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724632  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.724643  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724650  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724656  125655 command_runner.go:130] >     },
	I1119 21:57:55.724662  125655 command_runner.go:130] >     {
	I1119 21:57:55.724672  125655 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1119 21:57:55.724681  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724688  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1119 21:57:55.724694  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724714  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724730  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1119 21:57:55.724751  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1119 21:57:55.724760  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724772  125655 command_runner.go:130] >       "size": "31470524",
	I1119 21:57:55.724779  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724789  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.724795  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724802  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724808  125655 command_runner.go:130] >     },
	I1119 21:57:55.724814  125655 command_runner.go:130] >     {
	I1119 21:57:55.724828  125655 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1119 21:57:55.724838  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724847  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1119 21:57:55.724853  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724860  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724872  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1119 21:57:55.724903  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1119 21:57:55.724913  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724923  125655 command_runner.go:130] >       "size": "76103547",
	I1119 21:57:55.724930  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724940  125655 command_runner.go:130] >       "username": "nonroot",
	I1119 21:57:55.724945  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724950  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724955  125655 command_runner.go:130] >     },
	I1119 21:57:55.724959  125655 command_runner.go:130] >     {
	I1119 21:57:55.724967  125655 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1119 21:57:55.724974  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724982  125655 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1119 21:57:55.724989  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724996  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725010  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1119 21:57:55.725033  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1119 21:57:55.725048  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725055  125655 command_runner.go:130] >       "size": "195976448",
	I1119 21:57:55.725065  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725071  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725077  125655 command_runner.go:130] >       },
	I1119 21:57:55.725083  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725089  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725097  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725103  125655 command_runner.go:130] >     },
	I1119 21:57:55.725109  125655 command_runner.go:130] >     {
	I1119 21:57:55.725120  125655 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1119 21:57:55.725127  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725136  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1119 21:57:55.725142  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725149  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725161  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1119 21:57:55.725180  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1119 21:57:55.725186  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725203  125655 command_runner.go:130] >       "size": "89046001",
	I1119 21:57:55.725210  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725218  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725231  125655 command_runner.go:130] >       },
	I1119 21:57:55.725241  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725247  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725256  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725260  125655 command_runner.go:130] >     },
	I1119 21:57:55.725265  125655 command_runner.go:130] >     {
	I1119 21:57:55.725277  125655 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1119 21:57:55.725284  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725295  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1119 21:57:55.725301  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725308  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725324  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1119 21:57:55.725348  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1119 21:57:55.725357  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725364  125655 command_runner.go:130] >       "size": "76004181",
	I1119 21:57:55.725373  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725379  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725385  125655 command_runner.go:130] >       },
	I1119 21:57:55.725389  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725395  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725400  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725403  125655 command_runner.go:130] >     },
	I1119 21:57:55.725408  125655 command_runner.go:130] >     {
	I1119 21:57:55.725415  125655 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1119 21:57:55.725421  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725431  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1119 21:57:55.725437  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725443  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725453  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1119 21:57:55.725463  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1119 21:57:55.725470  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725477  125655 command_runner.go:130] >       "size": "73138073",
	I1119 21:57:55.725482  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.725489  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725496  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725503  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725509  125655 command_runner.go:130] >     },
	I1119 21:57:55.725515  125655 command_runner.go:130] >     {
	I1119 21:57:55.725525  125655 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1119 21:57:55.725531  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725539  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1119 21:57:55.725545  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725551  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725603  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1119 21:57:55.725620  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1119 21:57:55.725634  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725643  125655 command_runner.go:130] >       "size": "53844823",
	I1119 21:57:55.725649  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725655  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725661  125655 command_runner.go:130] >       },
	I1119 21:57:55.725667  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725674  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725680  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725685  125655 command_runner.go:130] >     },
	I1119 21:57:55.725691  125655 command_runner.go:130] >     {
	I1119 21:57:55.725704  125655 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1119 21:57:55.725711  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725721  125655 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.725727  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725733  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725745  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1119 21:57:55.725759  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1119 21:57:55.725765  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725774  125655 command_runner.go:130] >       "size": "742092",
	I1119 21:57:55.725781  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725787  125655 command_runner.go:130] >         "value": "65535"
	I1119 21:57:55.725795  125655 command_runner.go:130] >       },
	I1119 21:57:55.725818  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725827  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725833  125655 command_runner.go:130] >       "pinned": true
	I1119 21:57:55.725838  125655 command_runner.go:130] >     }
	I1119 21:57:55.725844  125655 command_runner.go:130] >   ]
	I1119 21:57:55.725849  125655 command_runner.go:130] > }
	I1119 21:57:55.726172  125655 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:57:55.726196  125655 cache_images.go:86] Images are preloaded, skipping loading
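The two crictl images --output json listings above are what the preload check at crio.go:514 walks over before deciding to skip extraction. Below is a minimal Go sketch of that kind of check; it models only the JSON fields visible in the listing, and the struct names and required-tag list are illustrative assumptions, not minikube's actual code.

// preloadcheck.go - illustrative sketch (not minikube's implementation) of parsing
// "sudo crictl images --output json" and confirming the expected images are present.
//
// Usage (illustrative): sudo crictl images --output json | go run preloadcheck.go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields visible in the log output above are modelled here.
type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type imageList struct {
	Images []image `json:"images"`
}

func main() {
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, "decode crictl output:", err)
		os.Exit(1)
	}

	// Tags taken from the listing above; adjust for other Kubernetes versions.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/kube-controller-manager:v1.34.1",
		"registry.k8s.io/kube-scheduler:v1.34.1",
		"registry.k8s.io/kube-proxy:v1.34.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"registry.k8s.io/pause:3.10.1",
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}

	missing := 0
	for _, tag := range required {
		if !have[tag] {
			fmt.Println("missing:", tag)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("all images are preloaded for cri-o runtime")
	}
}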
	I1119 21:57:55.726209  125655 kubeadm.go:935] updating node { 192.168.39.56 8441 v1.34.1 crio true true} ...
	I1119 21:57:55.726334  125655 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-274272 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
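The ExecStart line and the config block above are the per-node values minikube renders into a kubelet systemd drop-in. The following is a small, self-contained sketch of rendering that drop-in with Go's text/template, filled with this node's values; the template text and field names are illustrative assumptions, not minikube's real kubeadm templates.

// kubeletunit.go - illustrative sketch of rendering the kubelet ExecStart drop-in
// shown above from per-node values (template and field names are assumptions).
package main

import (
	"os"
	"text/template"
)

type nodeConfig struct {
	KubernetesVersion string // e.g. "v1.34.1"
	NodeName          string // e.g. "functional-274272"
	NodeIP            string // e.g. "192.168.39.56"
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	cfg := nodeConfig{
		KubernetesVersion: "v1.34.1",
		NodeName:          "functional-274272",
		NodeIP:            "192.168.39.56",
	}
	// Render to stdout; minikube writes the equivalent content onto the node.
	tmpl := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}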
	I1119 21:57:55.726417  125655 ssh_runner.go:195] Run: crio config
	I1119 21:57:55.773985  125655 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1119 21:57:55.774020  125655 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1119 21:57:55.774032  125655 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1119 21:57:55.774049  125655 command_runner.go:130] > #
	I1119 21:57:55.774057  125655 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1119 21:57:55.774064  125655 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1119 21:57:55.774073  125655 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1119 21:57:55.774083  125655 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1119 21:57:55.774088  125655 command_runner.go:130] > # reload'.
	I1119 21:57:55.774100  125655 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1119 21:57:55.774114  125655 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1119 21:57:55.774123  125655 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1119 21:57:55.774134  125655 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1119 21:57:55.774140  125655 command_runner.go:130] > [crio]
	I1119 21:57:55.774153  125655 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1119 21:57:55.774158  125655 command_runner.go:130] > # containers images, in this directory.
	I1119 21:57:55.774167  125655 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1119 21:57:55.774185  125655 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1119 21:57:55.774195  125655 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1119 21:57:55.774215  125655 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1119 21:57:55.774225  125655 command_runner.go:130] > # imagestore = ""
	I1119 21:57:55.774235  125655 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1119 21:57:55.774244  125655 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1119 21:57:55.774249  125655 command_runner.go:130] > # storage_driver = "overlay"
	I1119 21:57:55.774256  125655 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1119 21:57:55.774266  125655 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1119 21:57:55.774272  125655 command_runner.go:130] > storage_option = [
	I1119 21:57:55.774283  125655 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1119 21:57:55.774288  125655 command_runner.go:130] > ]
	I1119 21:57:55.774298  125655 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1119 21:57:55.774311  125655 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1119 21:57:55.774319  125655 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1119 21:57:55.774328  125655 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1119 21:57:55.774340  125655 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1119 21:57:55.774346  125655 command_runner.go:130] > # always happen on a node reboot
	I1119 21:57:55.774354  125655 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1119 21:57:55.774377  125655 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1119 21:57:55.774390  125655 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1119 21:57:55.774398  125655 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1119 21:57:55.774409  125655 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1119 21:57:55.774421  125655 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1119 21:57:55.774436  125655 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1119 21:57:55.774442  125655 command_runner.go:130] > # internal_wipe = true
	I1119 21:57:55.774455  125655 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1119 21:57:55.774462  125655 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1119 21:57:55.774469  125655 command_runner.go:130] > # internal_repair = false
	I1119 21:57:55.774476  125655 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1119 21:57:55.774486  125655 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1119 21:57:55.774494  125655 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1119 21:57:55.774508  125655 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1119 21:57:55.774516  125655 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1119 21:57:55.774529  125655 command_runner.go:130] > [crio.api]
	I1119 21:57:55.774537  125655 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1119 21:57:55.774545  125655 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1119 21:57:55.774555  125655 command_runner.go:130] > # IP address on which the stream server will listen.
	I1119 21:57:55.774560  125655 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1119 21:57:55.774574  125655 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1119 21:57:55.774583  125655 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1119 21:57:55.774589  125655 command_runner.go:130] > # stream_port = "0"
	I1119 21:57:55.774598  125655 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1119 21:57:55.774609  125655 command_runner.go:130] > # stream_enable_tls = false
	I1119 21:57:55.774617  125655 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1119 21:57:55.774621  125655 command_runner.go:130] > # stream_idle_timeout = ""
	I1119 21:57:55.774630  125655 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1119 21:57:55.774635  125655 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1119 21:57:55.774639  125655 command_runner.go:130] > # minutes.
	I1119 21:57:55.774643  125655 command_runner.go:130] > # stream_tls_cert = ""
	I1119 21:57:55.774648  125655 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1119 21:57:55.774656  125655 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1119 21:57:55.774665  125655 command_runner.go:130] > # stream_tls_key = ""
	I1119 21:57:55.774673  125655 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1119 21:57:55.774680  125655 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1119 21:57:55.774706  125655 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1119 21:57:55.774716  125655 command_runner.go:130] > # stream_tls_ca = ""
	I1119 21:57:55.774726  125655 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1119 21:57:55.774734  125655 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1119 21:57:55.774745  125655 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1119 21:57:55.774755  125655 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1119 21:57:55.774765  125655 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1119 21:57:55.774777  125655 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1119 21:57:55.774782  125655 command_runner.go:130] > [crio.runtime]
	I1119 21:57:55.774791  125655 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1119 21:57:55.774804  125655 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1119 21:57:55.774810  125655 command_runner.go:130] > # "nofile=1024:2048"
	I1119 21:57:55.774827  125655 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1119 21:57:55.774834  125655 command_runner.go:130] > # default_ulimits = [
	I1119 21:57:55.774841  125655 command_runner.go:130] > # ]
	I1119 21:57:55.774850  125655 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1119 21:57:55.774857  125655 command_runner.go:130] > # no_pivot = false
	I1119 21:57:55.774866  125655 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1119 21:57:55.774891  125655 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1119 21:57:55.774899  125655 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1119 21:57:55.774910  125655 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1119 21:57:55.774918  125655 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1119 21:57:55.774932  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1119 21:57:55.774939  125655 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1119 21:57:55.774947  125655 command_runner.go:130] > # Cgroup setting for conmon
	I1119 21:57:55.774955  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1119 21:57:55.774964  125655 command_runner.go:130] > conmon_cgroup = "pod"
	I1119 21:57:55.774974  125655 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1119 21:57:55.774985  125655 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1119 21:57:55.774996  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1119 21:57:55.775012  125655 command_runner.go:130] > conmon_env = [
	I1119 21:57:55.775026  125655 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1119 21:57:55.775031  125655 command_runner.go:130] > ]
	I1119 21:57:55.775043  125655 command_runner.go:130] > # Additional environment variables to set for all the
	I1119 21:57:55.775051  125655 command_runner.go:130] > # containers. These are overridden if set in the
	I1119 21:57:55.775061  125655 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1119 21:57:55.775067  125655 command_runner.go:130] > # default_env = [
	I1119 21:57:55.775073  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775081  125655 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1119 21:57:55.775095  125655 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1119 21:57:55.775101  125655 command_runner.go:130] > # selinux = false
	I1119 21:57:55.775112  125655 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1119 21:57:55.775121  125655 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1119 21:57:55.775133  125655 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1119 21:57:55.775139  125655 command_runner.go:130] > # seccomp_profile = ""
	I1119 21:57:55.775149  125655 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1119 21:57:55.775157  125655 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1119 21:57:55.775167  125655 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1119 21:57:55.775178  125655 command_runner.go:130] > # which might increase security.
	I1119 21:57:55.775185  125655 command_runner.go:130] > # This option is currently deprecated,
	I1119 21:57:55.775195  125655 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1119 21:57:55.775209  125655 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1119 21:57:55.775222  125655 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1119 21:57:55.775232  125655 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1119 21:57:55.775246  125655 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1119 21:57:55.775259  125655 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1119 21:57:55.775271  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.775278  125655 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1119 21:57:55.775287  125655 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1119 21:57:55.775299  125655 command_runner.go:130] > # the cgroup blockio controller.
	I1119 21:57:55.775305  125655 command_runner.go:130] > # blockio_config_file = ""
	I1119 21:57:55.775316  125655 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1119 21:57:55.775327  125655 command_runner.go:130] > # blockio parameters.
	I1119 21:57:55.775346  125655 command_runner.go:130] > # blockio_reload = false
	I1119 21:57:55.775357  125655 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1119 21:57:55.775363  125655 command_runner.go:130] > # irqbalance daemon.
	I1119 21:57:55.775379  125655 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1119 21:57:55.775387  125655 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1119 21:57:55.775397  125655 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1119 21:57:55.775412  125655 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1119 21:57:55.775428  125655 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1119 21:57:55.775441  125655 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1119 21:57:55.775450  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.775459  125655 command_runner.go:130] > # rdt_config_file = ""
	I1119 21:57:55.775465  125655 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1119 21:57:55.775470  125655 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1119 21:57:55.775549  125655 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1119 21:57:55.775561  125655 command_runner.go:130] > # separate_pull_cgroup = ""
	I1119 21:57:55.775567  125655 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1119 21:57:55.775573  125655 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1119 21:57:55.775576  125655 command_runner.go:130] > # will be added.
	I1119 21:57:55.775579  125655 command_runner.go:130] > # default_capabilities = [
	I1119 21:57:55.775584  125655 command_runner.go:130] > # 	"CHOWN",
	I1119 21:57:55.775591  125655 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1119 21:57:55.775604  125655 command_runner.go:130] > # 	"FSETID",
	I1119 21:57:55.775610  125655 command_runner.go:130] > # 	"FOWNER",
	I1119 21:57:55.775616  125655 command_runner.go:130] > # 	"SETGID",
	I1119 21:57:55.775623  125655 command_runner.go:130] > # 	"SETUID",
	I1119 21:57:55.775628  125655 command_runner.go:130] > # 	"SETPCAP",
	I1119 21:57:55.775634  125655 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1119 21:57:55.775641  125655 command_runner.go:130] > # 	"KILL",
	I1119 21:57:55.775646  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775659  125655 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1119 21:57:55.775666  125655 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1119 21:57:55.775672  125655 command_runner.go:130] > # add_inheritable_capabilities = false
	I1119 21:57:55.775685  125655 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1119 21:57:55.775705  125655 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1119 21:57:55.775716  125655 command_runner.go:130] > default_sysctls = [
	I1119 21:57:55.775723  125655 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1119 21:57:55.775728  125655 command_runner.go:130] > ]
	I1119 21:57:55.775738  125655 command_runner.go:130] > # List of devices on the host that a
	I1119 21:57:55.775747  125655 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1119 21:57:55.775754  125655 command_runner.go:130] > # allowed_devices = [
	I1119 21:57:55.775759  125655 command_runner.go:130] > # 	"/dev/fuse",
	I1119 21:57:55.775766  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775772  125655 command_runner.go:130] > # List of additional devices, specified as
	I1119 21:57:55.775779  125655 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1119 21:57:55.775791  125655 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1119 21:57:55.775801  125655 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1119 21:57:55.775807  125655 command_runner.go:130] > # additional_devices = [
	I1119 21:57:55.775813  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775824  125655 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1119 21:57:55.775830  125655 command_runner.go:130] > # cdi_spec_dirs = [
	I1119 21:57:55.775836  125655 command_runner.go:130] > # 	"/etc/cdi",
	I1119 21:57:55.775845  125655 command_runner.go:130] > # 	"/var/run/cdi",
	I1119 21:57:55.775850  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775860  125655 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1119 21:57:55.775871  125655 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1119 21:57:55.775894  125655 command_runner.go:130] > # Defaults to false.
	I1119 21:57:55.775901  125655 command_runner.go:130] > # device_ownership_from_security_context = false
	I1119 21:57:55.775919  125655 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1119 21:57:55.775932  125655 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1119 21:57:55.775938  125655 command_runner.go:130] > # hooks_dir = [
	I1119 21:57:55.775949  125655 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1119 21:57:55.775954  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775967  125655 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1119 21:57:55.775976  125655 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1119 21:57:55.775986  125655 command_runner.go:130] > # its default mounts from the following two files:
	I1119 21:57:55.775990  125655 command_runner.go:130] > #
	I1119 21:57:55.776006  125655 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1119 21:57:55.776015  125655 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1119 21:57:55.776024  125655 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1119 21:57:55.776032  125655 command_runner.go:130] > #
	I1119 21:57:55.776042  125655 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1119 21:57:55.776054  125655 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1119 21:57:55.776065  125655 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1119 21:57:55.776077  125655 command_runner.go:130] > #      only add mounts it finds in this file.
	I1119 21:57:55.776082  125655 command_runner.go:130] > #
	I1119 21:57:55.776089  125655 command_runner.go:130] > # default_mounts_file = ""
	I1119 21:57:55.776099  125655 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1119 21:57:55.776105  125655 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1119 21:57:55.776113  125655 command_runner.go:130] > pids_limit = 1024
	I1119 21:57:55.776123  125655 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1119 21:57:55.776136  125655 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1119 21:57:55.776145  125655 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1119 21:57:55.776161  125655 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1119 21:57:55.776171  125655 command_runner.go:130] > # log_size_max = -1
	I1119 21:57:55.776181  125655 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1119 21:57:55.776190  125655 command_runner.go:130] > # log_to_journald = false
	I1119 21:57:55.776199  125655 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1119 21:57:55.776213  125655 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1119 21:57:55.776219  125655 command_runner.go:130] > # Path to directory for container attach sockets.
	I1119 21:57:55.776229  125655 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1119 21:57:55.776238  125655 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1119 21:57:55.776248  125655 command_runner.go:130] > # bind_mount_prefix = ""
	I1119 21:57:55.776256  125655 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1119 21:57:55.776266  125655 command_runner.go:130] > # read_only = false
	I1119 21:57:55.776275  125655 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1119 21:57:55.776287  125655 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1119 21:57:55.776293  125655 command_runner.go:130] > # live configuration reload.
	I1119 21:57:55.776299  125655 command_runner.go:130] > # log_level = "info"
	I1119 21:57:55.776309  125655 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1119 21:57:55.776330  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.776339  125655 command_runner.go:130] > # log_filter = ""
	I1119 21:57:55.776349  125655 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1119 21:57:55.776364  125655 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1119 21:57:55.776372  125655 command_runner.go:130] > # separated by comma.
	I1119 21:57:55.776384  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776394  125655 command_runner.go:130] > # uid_mappings = ""
	I1119 21:57:55.776403  125655 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1119 21:57:55.776415  125655 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1119 21:57:55.776423  125655 command_runner.go:130] > # separated by comma.
	I1119 21:57:55.776433  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776443  125655 command_runner.go:130] > # gid_mappings = ""
	I1119 21:57:55.776452  125655 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1119 21:57:55.776465  125655 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1119 21:57:55.776478  125655 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1119 21:57:55.776490  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776498  125655 command_runner.go:130] > # minimum_mappable_uid = -1
	I1119 21:57:55.776507  125655 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1119 21:57:55.776520  125655 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1119 21:57:55.776530  125655 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1119 21:57:55.776540  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776548  125655 command_runner.go:130] > # minimum_mappable_gid = -1
	I1119 21:57:55.776557  125655 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1119 21:57:55.776569  125655 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1119 21:57:55.776587  125655 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1119 21:57:55.776597  125655 command_runner.go:130] > # ctr_stop_timeout = 30
	I1119 21:57:55.776607  125655 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1119 21:57:55.776619  125655 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1119 21:57:55.776626  125655 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1119 21:57:55.776637  125655 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1119 21:57:55.776642  125655 command_runner.go:130] > drop_infra_ctr = false
	I1119 21:57:55.776649  125655 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1119 21:57:55.776656  125655 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1119 21:57:55.776678  125655 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1119 21:57:55.776688  125655 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1119 21:57:55.776700  125655 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1119 21:57:55.776712  125655 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1119 21:57:55.776722  125655 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1119 21:57:55.776733  125655 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1119 21:57:55.776739  125655 command_runner.go:130] > # shared_cpuset = ""
	I1119 21:57:55.776751  125655 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1119 21:57:55.776759  125655 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1119 21:57:55.776765  125655 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1119 21:57:55.776775  125655 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1119 21:57:55.776785  125655 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1119 21:57:55.776794  125655 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1119 21:57:55.776810  125655 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1119 21:57:55.776820  125655 command_runner.go:130] > # enable_criu_support = false
	I1119 21:57:55.776828  125655 command_runner.go:130] > # Enable/disable the generation of the container,
	I1119 21:57:55.776840  125655 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1119 21:57:55.776847  125655 command_runner.go:130] > # enable_pod_events = false
	I1119 21:57:55.776856  125655 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1119 21:57:55.776862  125655 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1119 21:57:55.776870  125655 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1119 21:57:55.776886  125655 command_runner.go:130] > # default_runtime = "runc"
	I1119 21:57:55.776895  125655 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1119 21:57:55.776911  125655 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1119 21:57:55.776924  125655 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1119 21:57:55.776935  125655 command_runner.go:130] > # creation as a file is not desired either.
	I1119 21:57:55.776947  125655 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1119 21:57:55.776959  125655 command_runner.go:130] > # the hostname is being managed dynamically.
	I1119 21:57:55.776967  125655 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1119 21:57:55.776970  125655 command_runner.go:130] > # ]
	I1119 21:57:55.776979  125655 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1119 21:57:55.776993  125655 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1119 21:57:55.777006  125655 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1119 21:57:55.777024  125655 command_runner.go:130] > # Each entry in the table should follow the format:
	I1119 21:57:55.777032  125655 command_runner.go:130] > #
	I1119 21:57:55.777040  125655 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1119 21:57:55.777050  125655 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1119 21:57:55.777057  125655 command_runner.go:130] > # runtime_type = "oci"
	I1119 21:57:55.777119  125655 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1119 21:57:55.777131  125655 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1119 21:57:55.777138  125655 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1119 21:57:55.777145  125655 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1119 21:57:55.777152  125655 command_runner.go:130] > # monitor_env = []
	I1119 21:57:55.777163  125655 command_runner.go:130] > # privileged_without_host_devices = false
	I1119 21:57:55.777172  125655 command_runner.go:130] > # allowed_annotations = []
	I1119 21:57:55.777208  125655 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1119 21:57:55.777215  125655 command_runner.go:130] > # Where:
	I1119 21:57:55.777223  125655 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1119 21:57:55.777236  125655 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1119 21:57:55.777247  125655 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1119 21:57:55.777258  125655 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1119 21:57:55.777268  125655 command_runner.go:130] > #   in $PATH.
	I1119 21:57:55.777278  125655 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1119 21:57:55.777288  125655 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1119 21:57:55.777297  125655 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1119 21:57:55.777303  125655 command_runner.go:130] > #   state.
	I1119 21:57:55.777311  125655 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1119 21:57:55.777325  125655 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1119 21:57:55.777335  125655 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1119 21:57:55.777350  125655 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1119 21:57:55.777362  125655 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1119 21:57:55.777373  125655 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1119 21:57:55.777381  125655 command_runner.go:130] > #   The currently recognized values are:
	I1119 21:57:55.777388  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1119 21:57:55.777402  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1119 21:57:55.777414  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1119 21:57:55.777431  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1119 21:57:55.777446  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1119 21:57:55.777459  125655 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1119 21:57:55.777470  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1119 21:57:55.777478  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1119 21:57:55.777484  125655 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1119 21:57:55.777499  125655 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1119 21:57:55.777510  125655 command_runner.go:130] > #   deprecated option "conmon".
	I1119 21:57:55.777521  125655 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1119 21:57:55.777532  125655 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1119 21:57:55.777543  125655 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1119 21:57:55.777553  125655 command_runner.go:130] > #   should be moved to the container's cgroup
	I1119 21:57:55.777567  125655 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1119 21:57:55.777578  125655 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1119 21:57:55.777586  125655 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1119 21:57:55.777593  125655 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1119 21:57:55.777598  125655 command_runner.go:130] > #
	I1119 21:57:55.777606  125655 command_runner.go:130] > # Using the seccomp notifier feature:
	I1119 21:57:55.777612  125655 command_runner.go:130] > #
	I1119 21:57:55.777628  125655 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1119 21:57:55.777638  125655 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1119 21:57:55.777646  125655 command_runner.go:130] > #
	I1119 21:57:55.777655  125655 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1119 21:57:55.777667  125655 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1119 21:57:55.777672  125655 command_runner.go:130] > #
	I1119 21:57:55.777682  125655 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1119 21:57:55.777688  125655 command_runner.go:130] > # feature.
	I1119 21:57:55.777693  125655 command_runner.go:130] > #
	I1119 21:57:55.777701  125655 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1119 21:57:55.777709  125655 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1119 21:57:55.777719  125655 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1119 21:57:55.777728  125655 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1119 21:57:55.777741  125655 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1119 21:57:55.777752  125655 command_runner.go:130] > #
	I1119 21:57:55.777764  125655 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1119 21:57:55.777774  125655 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1119 21:57:55.777781  125655 command_runner.go:130] > #
	I1119 21:57:55.777788  125655 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1119 21:57:55.777794  125655 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1119 21:57:55.777799  125655 command_runner.go:130] > #
	I1119 21:57:55.777804  125655 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1119 21:57:55.777810  125655 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1119 21:57:55.777814  125655 command_runner.go:130] > # limitation.
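The seccomp notifier described in the comments above needs two things: the runtime handler must list the annotation in allowed_annotations, and the pod must carry it with restartPolicy set to Never. A minimal sketch, assuming CRI-O's drop-in directory /etc/crio/crio.conf.d/ on the node and using only names quoted in the comments above (the drop-in file name is illustrative):

    sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf >/dev/null <<'EOF'
    # Allow the notifier annotation on the default runc handler.
    [crio.runtime.runtimes.runc]
    runtime_path = "/usr/bin/runc"
    allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
    EOF
    sudo systemctl restart crio
    # The opted-in pod then sets restartPolicy: Never and the annotation
    # io.kubernetes.cri-o.seccompNotifierAction: "stop", so a blocked syscall
    # terminates the workload after the 5-second timeout described above.

Depending on the CRI-O version, a partial runtime table in a drop-in may need the full handler definition restated, so treat this as a sketch rather than a drop-in guaranteed to merge cleanly.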
	I1119 21:57:55.777820  125655 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1119 21:57:55.777824  125655 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1119 21:57:55.777829  125655 command_runner.go:130] > runtime_type = "oci"
	I1119 21:57:55.777835  125655 command_runner.go:130] > runtime_root = "/run/runc"
	I1119 21:57:55.777839  125655 command_runner.go:130] > runtime_config_path = ""
	I1119 21:57:55.777843  125655 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1119 21:57:55.777847  125655 command_runner.go:130] > monitor_cgroup = "pod"
	I1119 21:57:55.777853  125655 command_runner.go:130] > monitor_exec_cgroup = ""
	I1119 21:57:55.777857  125655 command_runner.go:130] > monitor_env = [
	I1119 21:57:55.777862  125655 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1119 21:57:55.777866  125655 command_runner.go:130] > ]
	I1119 21:57:55.777870  125655 command_runner.go:130] > privileged_without_host_devices = false
	I1119 21:57:55.777885  125655 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1119 21:57:55.777890  125655 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1119 21:57:55.777898  125655 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1119 21:57:55.777905  125655 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1119 21:57:55.777915  125655 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1119 21:57:55.777923  125655 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1119 21:57:55.777936  125655 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1119 21:57:55.777946  125655 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1119 21:57:55.777952  125655 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1119 21:57:55.777959  125655 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1119 21:57:55.777963  125655 command_runner.go:130] > # Example:
	I1119 21:57:55.777976  125655 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1119 21:57:55.777983  125655 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1119 21:57:55.777987  125655 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1119 21:57:55.777992  125655 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1119 21:57:55.777995  125655 command_runner.go:130] > # cpuset = 0
	I1119 21:57:55.777999  125655 command_runner.go:130] > # cpushares = "0-1"
	I1119 21:57:55.778002  125655 command_runner.go:130] > # Where:
	I1119 21:57:55.778006  125655 command_runner.go:130] > # The workload name is workload-type.
	I1119 21:57:55.778015  125655 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1119 21:57:55.778020  125655 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1119 21:57:55.778025  125655 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1119 21:57:55.778037  125655 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1119 21:57:55.778043  125655 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
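To make the workload example above concrete from the pod side, here is a hedged sketch of a pod that opts into the hypothetical "workload-type" workload and overrides cpushares for one container; the annotation keys come from the comments above, while the pod name, container name and value are made up for illustration:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                              # activation annotation, value ignored
        io.crio.workload-type/app: '{"cpushares": "512"}' # per-container override
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF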
	I1119 21:57:55.778048  125655 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1119 21:57:55.778053  125655 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1119 21:57:55.778058  125655 command_runner.go:130] > # Default value is set to true
	I1119 21:57:55.778062  125655 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1119 21:57:55.778067  125655 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1119 21:57:55.778071  125655 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1119 21:57:55.778075  125655 command_runner.go:130] > # Default value is set to 'false'
	I1119 21:57:55.778079  125655 command_runner.go:130] > # disable_hostport_mapping = false
	I1119 21:57:55.778085  125655 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1119 21:57:55.778090  125655 command_runner.go:130] > #
	I1119 21:57:55.778095  125655 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1119 21:57:55.778101  125655 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1119 21:57:55.778106  125655 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1119 21:57:55.778115  125655 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1119 21:57:55.778120  125655 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1119 21:57:55.778125  125655 command_runner.go:130] > [crio.image]
	I1119 21:57:55.778131  125655 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1119 21:57:55.778135  125655 command_runner.go:130] > # default_transport = "docker://"
	I1119 21:57:55.778140  125655 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1119 21:57:55.778146  125655 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1119 21:57:55.778154  125655 command_runner.go:130] > # global_auth_file = ""
	I1119 21:57:55.778162  125655 command_runner.go:130] > # The image used to instantiate infra containers.
	I1119 21:57:55.778166  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.778171  125655 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.778176  125655 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1119 21:57:55.778184  125655 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1119 21:57:55.778189  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.778193  125655 command_runner.go:130] > # pause_image_auth_file = ""
	I1119 21:57:55.778201  125655 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1119 21:57:55.778210  125655 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1119 21:57:55.778216  125655 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1119 21:57:55.778221  125655 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1119 21:57:55.778226  125655 command_runner.go:130] > # pause_command = "/pause"
	I1119 21:57:55.778232  125655 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1119 21:57:55.778237  125655 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1119 21:57:55.778244  125655 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1119 21:57:55.778249  125655 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1119 21:57:55.778256  125655 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1119 21:57:55.778264  125655 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1119 21:57:55.778268  125655 command_runner.go:130] > # pinned_images = [
	I1119 21:57:55.778271  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778277  125655 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1119 21:57:55.778283  125655 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1119 21:57:55.778288  125655 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1119 21:57:55.778296  125655 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1119 21:57:55.778301  125655 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1119 21:57:55.778305  125655 command_runner.go:130] > # signature_policy = ""
	I1119 21:57:55.778310  125655 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1119 21:57:55.778316  125655 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1119 21:57:55.778323  125655 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1119 21:57:55.778328  125655 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1119 21:57:55.778336  125655 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1119 21:57:55.778341  125655 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1119 21:57:55.778354  125655 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1119 21:57:55.778360  125655 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1119 21:57:55.778364  125655 command_runner.go:130] > # changing them here.
	I1119 21:57:55.778368  125655 command_runner.go:130] > # insecure_registries = [
	I1119 21:57:55.778371  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778377  125655 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1119 21:57:55.778382  125655 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1119 21:57:55.778386  125655 command_runner.go:130] > # image_volumes = "mkdir"
	I1119 21:57:55.778390  125655 command_runner.go:130] > # Temporary directory to use for storing big files
	I1119 21:57:55.778396  125655 command_runner.go:130] > # big_files_temporary_dir = ""
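Because the settings above defer most registry behaviour to the system-wide containers-registries.conf and pin only the pause image by default, the quickest sanity checks are done against the node itself; a small sketch using the profile from this run (the grep pattern is just an example):

    # Registry defaults CRI-O falls back to while insecure_registries/registries stay commented out:
    minikube -p functional-274272 ssh "cat /etc/containers/registries.conf"
    # Confirm the configured pause_image is present on the node:
    minikube -p functional-274272 ssh "sudo crictl images | grep pause"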
	I1119 21:57:55.778401  125655 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1119 21:57:55.778405  125655 command_runner.go:130] > # CNI plugins.
	I1119 21:57:55.778411  125655 command_runner.go:130] > [crio.network]
	I1119 21:57:55.778416  125655 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1119 21:57:55.778421  125655 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1119 21:57:55.778427  125655 command_runner.go:130] > # cni_default_network = ""
	I1119 21:57:55.778432  125655 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1119 21:57:55.778436  125655 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1119 21:57:55.778441  125655 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1119 21:57:55.778445  125655 command_runner.go:130] > # plugin_dirs = [
	I1119 21:57:55.778448  125655 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1119 21:57:55.778450  125655 command_runner.go:130] > # ]
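With cni_default_network unset, CRI-O simply takes the first configuration it finds in network_dir. As a point of reference, a minimal bridge conflist dropped into /etc/cni/net.d/ could look like the sketch below; the file name is arbitrary and the subnet only mirrors the 10.244.0.0/16 pod CIDR used later in this log, so this is illustrative rather than what minikube actually writes:

    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF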
	I1119 21:57:55.778461  125655 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1119 21:57:55.778467  125655 command_runner.go:130] > [crio.metrics]
	I1119 21:57:55.778471  125655 command_runner.go:130] > # Globally enable or disable metrics support.
	I1119 21:57:55.778475  125655 command_runner.go:130] > enable_metrics = true
	I1119 21:57:55.778479  125655 command_runner.go:130] > # Specify enabled metrics collectors.
	I1119 21:57:55.778485  125655 command_runner.go:130] > # Per default all metrics are enabled.
	I1119 21:57:55.778491  125655 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1119 21:57:55.778496  125655 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1119 21:57:55.778502  125655 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1119 21:57:55.778506  125655 command_runner.go:130] > # metrics_collectors = [
	I1119 21:57:55.778510  125655 command_runner.go:130] > # 	"operations",
	I1119 21:57:55.778519  125655 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1119 21:57:55.778526  125655 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1119 21:57:55.778529  125655 command_runner.go:130] > # 	"operations_errors",
	I1119 21:57:55.778533  125655 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1119 21:57:55.778537  125655 command_runner.go:130] > # 	"image_pulls_by_name",
	I1119 21:57:55.778541  125655 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1119 21:57:55.778545  125655 command_runner.go:130] > # 	"image_pulls_failures",
	I1119 21:57:55.778551  125655 command_runner.go:130] > # 	"image_pulls_successes",
	I1119 21:57:55.778556  125655 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1119 21:57:55.778559  125655 command_runner.go:130] > # 	"image_layer_reuse",
	I1119 21:57:55.778563  125655 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1119 21:57:55.778567  125655 command_runner.go:130] > # 	"containers_oom_total",
	I1119 21:57:55.778570  125655 command_runner.go:130] > # 	"containers_oom",
	I1119 21:57:55.778574  125655 command_runner.go:130] > # 	"processes_defunct",
	I1119 21:57:55.778578  125655 command_runner.go:130] > # 	"operations_total",
	I1119 21:57:55.778582  125655 command_runner.go:130] > # 	"operations_latency_seconds",
	I1119 21:57:55.778588  125655 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1119 21:57:55.778592  125655 command_runner.go:130] > # 	"operations_errors_total",
	I1119 21:57:55.778596  125655 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1119 21:57:55.778600  125655 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1119 21:57:55.778604  125655 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1119 21:57:55.778611  125655 command_runner.go:130] > # 	"image_pulls_success_total",
	I1119 21:57:55.778614  125655 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1119 21:57:55.778618  125655 command_runner.go:130] > # 	"containers_oom_count_total",
	I1119 21:57:55.778625  125655 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1119 21:57:55.778629  125655 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1119 21:57:55.778632  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778637  125655 command_runner.go:130] > # The port on which the metrics server will listen.
	I1119 21:57:55.778641  125655 command_runner.go:130] > # metrics_port = 9090
	I1119 21:57:55.778645  125655 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1119 21:57:55.778656  125655 command_runner.go:130] > # metrics_socket = ""
	I1119 21:57:55.778660  125655 command_runner.go:130] > # The certificate for the secure metrics server.
	I1119 21:57:55.778665  125655 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1119 21:57:55.778678  125655 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1119 21:57:55.778683  125655 command_runner.go:130] > # certificate on any modification event.
	I1119 21:57:55.778689  125655 command_runner.go:130] > # metrics_cert = ""
	I1119 21:57:55.778694  125655 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1119 21:57:55.778699  125655 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1119 21:57:55.778703  125655 command_runner.go:130] > # metrics_key = ""
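Since enable_metrics is true and metrics_port defaults to 9090 as shown above, the collectors listed here can be scraped straight off the node; a one-line check against this profile (curl ships in the ISO, the grep pattern is just an example):

    minikube -p functional-274272 ssh "curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio_operations"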
	I1119 21:57:55.778708  125655 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1119 21:57:55.778714  125655 command_runner.go:130] > [crio.tracing]
	I1119 21:57:55.778719  125655 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1119 21:57:55.778722  125655 command_runner.go:130] > # enable_tracing = false
	I1119 21:57:55.778729  125655 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1119 21:57:55.778733  125655 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1119 21:57:55.778739  125655 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1119 21:57:55.778746  125655 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1119 21:57:55.778750  125655 command_runner.go:130] > # CRI-O NRI configuration.
	I1119 21:57:55.778753  125655 command_runner.go:130] > [crio.nri]
	I1119 21:57:55.778757  125655 command_runner.go:130] > # Globally enable or disable NRI.
	I1119 21:57:55.778761  125655 command_runner.go:130] > # enable_nri = false
	I1119 21:57:55.778766  125655 command_runner.go:130] > # NRI socket to listen on.
	I1119 21:57:55.778772  125655 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1119 21:57:55.778776  125655 command_runner.go:130] > # NRI plugin directory to use.
	I1119 21:57:55.778783  125655 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1119 21:57:55.778787  125655 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1119 21:57:55.778791  125655 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1119 21:57:55.778796  125655 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1119 21:57:55.778801  125655 command_runner.go:130] > # nri_disable_connections = false
	I1119 21:57:55.778805  125655 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1119 21:57:55.778809  125655 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1119 21:57:55.778814  125655 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1119 21:57:55.778818  125655 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1119 21:57:55.778823  125655 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1119 21:57:55.778826  125655 command_runner.go:130] > [crio.stats]
	I1119 21:57:55.778831  125655 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1119 21:57:55.778844  125655 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1119 21:57:55.778848  125655 command_runner.go:130] > # stats_collection_period = 0
	I1119 21:57:55.778894  125655 command_runner.go:130] ! time="2025-11-19 21:57:55.755704188Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1119 21:57:55.778909  125655 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
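Everything above is CRI-O echoing its effective configuration while minikube provisions the node. The same information can be pulled back on demand; a sketch, assuming the crio binary's config subcommand is available in this ISO:

    # Configuration CRI-O is actually running with:
    minikube -p functional-274272 ssh "sudo crio config | head -n 40"
    # Runtime status and config as seen through the CRI:
    minikube -p functional-274272 ssh "sudo crictl info"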
	I1119 21:57:55.779016  125655 cni.go:84] Creating CNI manager for ""
	I1119 21:57:55.779032  125655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:57:55.779052  125655 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:57:55.779081  125655 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-274272 NodeName:functional-274272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:57:55.779230  125655 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-274272"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.56"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
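	
	The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. To sanity-check a config like this by hand, the least invasive option is a dry run inside the VM; preflight errors are expected on a node that already runs a cluster, so treat this purely as a validation sketch:
	
	    minikube -p functional-274272 ssh "sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run"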
	
	I1119 21:57:55.779314  125655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:57:55.791865  125655 command_runner.go:130] > kubeadm
	I1119 21:57:55.791898  125655 command_runner.go:130] > kubectl
	I1119 21:57:55.791902  125655 command_runner.go:130] > kubelet
	I1119 21:57:55.792339  125655 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:57:55.792402  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:57:55.804500  125655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1119 21:57:55.831211  125655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:57:55.857336  125655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1119 21:57:55.883585  125655 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I1119 21:57:55.888240  125655 command_runner.go:130] > 192.168.39.56	control-plane.minikube.internal
	I1119 21:57:55.888422  125655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:57:56.081809  125655 ssh_runner.go:195] Run: sudo systemctl start kubelet
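After the daemon-reload and kubelet start above, systemd is the quickest way to confirm the kubelet actually stayed up; a small sketch against the same profile:

    minikube -p functional-274272 ssh "sudo systemctl is-active kubelet"
    minikube -p functional-274272 ssh "sudo journalctl -u kubelet --no-pager -n 20"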
	I1119 21:57:56.102791  125655 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272 for IP: 192.168.39.56
	I1119 21:57:56.102821  125655 certs.go:195] generating shared ca certs ...
	I1119 21:57:56.102844  125655 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:57:56.103063  125655 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 21:57:56.103136  125655 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 21:57:56.103152  125655 certs.go:257] generating profile certs ...
	I1119 21:57:56.103293  125655 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/client.key
	I1119 21:57:56.103368  125655 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key.ff709108
	I1119 21:57:56.103443  125655 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key
	I1119 21:57:56.103459  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 21:57:56.103484  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 21:57:56.103511  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 21:57:56.103529  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 21:57:56.103543  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 21:57:56.103561  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 21:57:56.103579  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 21:57:56.103596  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 21:57:56.103672  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 21:57:56.103719  125655 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 21:57:56.103738  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 21:57:56.103773  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 21:57:56.103801  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:57:56.103827  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 21:57:56.103904  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 21:57:56.103946  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.103967  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.103983  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.104844  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:57:56.137315  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 21:57:56.170238  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:57:56.201511  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 21:57:56.232500  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 21:57:56.263196  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:57:56.293733  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:57:56.325433  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 21:57:56.358184  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 21:57:56.390372  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:57:56.421898  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 21:57:56.453376  125655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:57:56.475453  125655 ssh_runner.go:195] Run: openssl version
	I1119 21:57:56.482740  125655 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1119 21:57:56.482959  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:57:56.496693  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502262  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502418  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502483  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.510342  125655 command_runner.go:130] > b5213941
	I1119 21:57:56.510469  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 21:57:56.522344  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 21:57:56.536631  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542029  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542209  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542274  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.550380  125655 command_runner.go:130] > 51391683
	I1119 21:57:56.550501  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 21:57:56.561821  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 21:57:56.575290  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.580784  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.581086  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.581144  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.588956  125655 command_runner.go:130] > 3ec20f2e
	I1119 21:57:56.589037  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 21:57:56.601212  125655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:57:56.606946  125655 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:57:56.606978  125655 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1119 21:57:56.606987  125655 command_runner.go:130] > Device: 253,1	Inode: 9430692     Links: 1
	I1119 21:57:56.606996  125655 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1119 21:57:56.607006  125655 command_runner.go:130] > Access: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607014  125655 command_runner.go:130] > Modify: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607022  125655 command_runner.go:130] > Change: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607031  125655 command_runner.go:130] >  Birth: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607101  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 21:57:56.614717  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.614807  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 21:57:56.622151  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.622364  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 21:57:56.629649  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.630010  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 21:57:56.637598  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.637675  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 21:57:56.645478  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.645584  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 21:57:56.652788  125655 command_runner.go:130] > Certificate will not expire
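The certificate handling above is the standard OpenSSL subject-hash symlink plus a 24-hour -checkend window. Reproduced by hand inside the VM (paths and the 86400-second window are the ones from this log), it comes down to:

    # Subject-hash symlink so OpenSSL can locate the CA:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # Exit non-zero if the client cert expires within the next 24 hours:
    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt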
	I1119 21:57:56.653060  125655 kubeadm.go:401] StartCluster: {Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:57:56.653151  125655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:57:56.653212  125655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:57:56.693386  125655 command_runner.go:130] > 0106e4f2ce61898cba6e2b7a948933217c923a83f47dc98f8e443135d7c1953c
	I1119 21:57:56.693418  125655 command_runner.go:130] > 27fcf5ffa9c5cce8c4adcef8f00caed64a4fab40166ca861d697c36836f01dc9
	I1119 21:57:56.693423  125655 command_runner.go:130] > 33d592c056efc2e2c713428bd1a12974c07ffee6ed0d926d5ef6cd0cca4db55d
	I1119 21:57:56.693431  125655 command_runner.go:130] > 899acc3a073d3b1e8a64e329e67a0c9a8014c3e5fb818300620298c210fd33f1
	I1119 21:57:56.693436  125655 command_runner.go:130] > cb88ba1c8d8cc6830004837e32b6220121207668f01c3560a51e2a8ce1e36ed3
	I1119 21:57:56.693441  125655 command_runner.go:130] > 077ec5fba8a3dcc009b2738e8bb85762db7ca0d2d2f4153471471dce4bb69d58
	I1119 21:57:56.693445  125655 command_runner.go:130] > f188ee072392f6539a4fc0dbc95ec1b18f76377b44c1ca821fc51f07d6c4ec6b
	I1119 21:57:56.693453  125655 command_runner.go:130] > 94bc4b1fc9ffc618caea337c906ff95370af6d466ae4d510d6d81512364b13b1
	I1119 21:57:56.693458  125655 command_runner.go:130] > 8a2841a674b031e27e3be5469070765869d325fd68f393a4e80843dbe974314f
	I1119 21:57:56.693463  125655 command_runner.go:130] > 37c3495f685326ea33fc62203369d3d57a6b53431dd8b780278885837239fdfc
	I1119 21:57:56.693468  125655 command_runner.go:130] > 931760b60afe373933492aa207a6c5231c4e48d65a493c258f05f5ff220173d5
	I1119 21:57:56.693473  125655 command_runner.go:130] > d2921c6f5f07b3d3a3ca4ccf47283bb6bb22d080b6da461bd67a7b1660db1191
	I1119 21:57:56.695025  125655 cri.go:89] found id: "0106e4f2ce61898cba6e2b7a948933217c923a83f47dc98f8e443135d7c1953c"
	I1119 21:57:56.695043  125655 cri.go:89] found id: "27fcf5ffa9c5cce8c4adcef8f00caed64a4fab40166ca861d697c36836f01dc9"
	I1119 21:57:56.695048  125655 cri.go:89] found id: "33d592c056efc2e2c713428bd1a12974c07ffee6ed0d926d5ef6cd0cca4db55d"
	I1119 21:57:56.695053  125655 cri.go:89] found id: "899acc3a073d3b1e8a64e329e67a0c9a8014c3e5fb818300620298c210fd33f1"
	I1119 21:57:56.695056  125655 cri.go:89] found id: "cb88ba1c8d8cc6830004837e32b6220121207668f01c3560a51e2a8ce1e36ed3"
	I1119 21:57:56.695061  125655 cri.go:89] found id: "077ec5fba8a3dcc009b2738e8bb85762db7ca0d2d2f4153471471dce4bb69d58"
	I1119 21:57:56.695067  125655 cri.go:89] found id: "f188ee072392f6539a4fc0dbc95ec1b18f76377b44c1ca821fc51f07d6c4ec6b"
	I1119 21:57:56.695072  125655 cri.go:89] found id: "94bc4b1fc9ffc618caea337c906ff95370af6d466ae4d510d6d81512364b13b1"
	I1119 21:57:56.695077  125655 cri.go:89] found id: "8a2841a674b031e27e3be5469070765869d325fd68f393a4e80843dbe974314f"
	I1119 21:57:56.695088  125655 cri.go:89] found id: "37c3495f685326ea33fc62203369d3d57a6b53431dd8b780278885837239fdfc"
	I1119 21:57:56.695093  125655 cri.go:89] found id: "931760b60afe373933492aa207a6c5231c4e48d65a493c258f05f5ff220173d5"
	I1119 21:57:56.695097  125655 cri.go:89] found id: "d2921c6f5f07b3d3a3ca4ccf47283bb6bb22d080b6da461bd67a7b1660db1191"
	I1119 21:57:56.695101  125655 cri.go:89] found id: ""
	I1119 21:57:56.695155  125655 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
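The captured stdout ends while minikube is enumerating kube-system containers through the CRI. The same listing can be reproduced manually, which is also the easiest way to map the container IDs above back to pod names; a sketch against this profile, using the first ID from the list:

    minikube -p functional-274272 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o table"
    minikube -p functional-274272 ssh "sudo crictl inspect 0106e4f2ce61 | head -n 20"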
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-274272 -n functional-274272
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-274272 -n functional-274272: exit status 2 (203.569702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-274272" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (1577.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (742.7s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-274272 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-274272 get po -A: exit status 1 (52.22539ms)

                                                
                                                
** stderr ** 
	E1119 22:22:32.598834  131428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.56:8441/api?timeout=32s\": dial tcp 192.168.39.56:8441: connect: connection refused"
	E1119 22:22:32.599401  131428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.56:8441/api?timeout=32s\": dial tcp 192.168.39.56:8441: connect: connection refused"
	E1119 22:22:32.600384  131428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.56:8441/api?timeout=32s\": dial tcp 192.168.39.56:8441: connect: connection refused"
	E1119 22:22:32.600804  131428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.56:8441/api?timeout=32s\": dial tcp 192.168.39.56:8441: connect: connection refused"
	E1119 22:22:32.602262  131428 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.39.56:8441/api?timeout=32s\": dial tcp 192.168.39.56:8441: connect: connection refused"
	The connection to the server 192.168.39.56:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-274272 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1119 22:22:32.598834  131428 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.56:8441/api?timeout=32s\\\": dial tcp 192.168.39.56:8441: connect: connection refused\"\nE1119 22:22:32.599401  131428 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.56:8441/api?timeout=32s\\\": dial tcp 192.168.39.56:8441: connect: connection refused\"\nE1119 22:22:32.600384  131428 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.56:8441/api?timeout=32s\\\": dial tcp 192.168.39.56:8441: connect: connection refused\"\nE1119 22:22:32.600804  131428 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.56:8441/api?timeout=32s\\\": dial tcp 192.168.39.56:8441: connect: connection refused\"\nE1119 22:22:32.602262  131
428 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.39.56:8441/api?timeout=32s\\\": dial tcp 192.168.39.56:8441: connect: connection refused\"\nThe connection to the server 192.168.39.56:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-274272 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-274272 get po -A"
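The stderr above is a plain connection refused on 192.168.39.56:8441, i.e. the apiserver itself is down rather than kubectl being misconfigured. A quick triage sketch against the same profile and endpoint (ss is assumed to be available in the ISO):

    # Anything listening on the apiserver port inside the VM?
    minikube -p functional-274272 ssh "sudo ss -ltnp | grep 8441"
    # Is the kube-apiserver container running or crash-looping?
    minikube -p functional-274272 ssh "sudo crictl ps -a --name kube-apiserver"
    # What does minikube itself report for the control plane?
    minikube status -p functional-274272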
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-274272 -n functional-274272
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-274272 -n functional-274272: exit status 2 (196.069834ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 logs -n 25
E1119 22:24:25.103108  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:27:28.177121  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:29:25.093859  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:34:25.102103  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-274272 logs -n 25: (12m22.181063451s)
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ addons-638975 addons disable csi-hostpath-driver --alsologtostderr -v=1                                               │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │ 19 Nov 25 21:50 UTC │
	│ ip      │ addons-638975 ip                                                                                                      │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons  │ addons-638975 addons disable ingress-dns --alsologtostderr -v=1                                                       │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ addons  │ addons-638975 addons disable ingress --alsologtostderr -v=1                                                           │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	│ stop    │ -p addons-638975                                                                                                      │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:54 UTC │
	│ addons  │ enable dashboard -p addons-638975                                                                                     │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ addons  │ disable dashboard -p addons-638975                                                                                    │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ addons  │ disable gvisor -p addons-638975                                                                                       │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ delete  │ -p addons-638975                                                                                                      │ addons-638975     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ start   │ -p nospam-527873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-527873 --driver=kvm2  --container-runtime=crio │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ start   │ nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run                                                            │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │                     │
	│ start   │ nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run                                                            │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │                     │
	│ start   │ nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run                                                            │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │                     │
	│ pause   │ nospam-527873 --log_dir /tmp/nospam-527873 pause                                                                      │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ pause   │ nospam-527873 --log_dir /tmp/nospam-527873 pause                                                                      │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ pause   │ nospam-527873 --log_dir /tmp/nospam-527873 pause                                                                      │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ unpause │ nospam-527873 --log_dir /tmp/nospam-527873 unpause                                                                    │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ unpause │ nospam-527873 --log_dir /tmp/nospam-527873 unpause                                                                    │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ unpause │ nospam-527873 --log_dir /tmp/nospam-527873 unpause                                                                    │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ stop    │ nospam-527873 --log_dir /tmp/nospam-527873 stop                                                                       │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ stop    │ nospam-527873 --log_dir /tmp/nospam-527873 stop                                                                       │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ stop    │ nospam-527873 --log_dir /tmp/nospam-527873 stop                                                                       │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ delete  │ -p nospam-527873                                                                                                      │ nospam-527873     │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:54 UTC │
	│ start   │ -p functional-274272 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio           │ functional-274272 │ jenkins │ v1.37.0 │ 19 Nov 25 21:54 UTC │ 19 Nov 25 21:56 UTC │
	│ start   │ -p functional-274272 --alsologtostderr -v=8                                                                           │ functional-274272 │ jenkins │ v1.37.0 │ 19 Nov 25 21:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:56:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:56:15.524505  125655 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:56:15.524640  125655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:56:15.524650  125655 out.go:374] Setting ErrFile to fd 2...
	I1119 21:56:15.524653  125655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:56:15.524902  125655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 21:56:15.525357  125655 out.go:368] Setting JSON to false
	I1119 21:56:15.526238  125655 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13122,"bootTime":1763576253,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:56:15.526344  125655 start.go:143] virtualization: kvm guest
	I1119 21:56:15.529220  125655 out.go:179] * [functional-274272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:56:15.530888  125655 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:56:15.530892  125655 notify.go:221] Checking for updates...
	I1119 21:56:15.533320  125655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:56:15.534592  125655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 21:56:15.535896  125655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:56:15.537284  125655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:56:15.538692  125655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:56:15.540498  125655 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:56:15.540627  125655 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:56:15.578138  125655 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 21:56:15.579740  125655 start.go:309] selected driver: kvm2
	I1119 21:56:15.579759  125655 start.go:930] validating driver "kvm2" against &{Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:56:15.579860  125655 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:56:15.580973  125655 cni.go:84] Creating CNI manager for ""
	I1119 21:56:15.581058  125655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:56:15.581134  125655 start.go:353] cluster config:
	{Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:56:15.581282  125655 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:56:15.582970  125655 out.go:179] * Starting "functional-274272" primary control-plane node in "functional-274272" cluster
	I1119 21:56:15.584343  125655 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:56:15.584377  125655 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 21:56:15.584386  125655 cache.go:65] Caching tarball of preloaded images
	I1119 21:56:15.584490  125655 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 21:56:15.584505  125655 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:56:15.584592  125655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/config.json ...
	I1119 21:56:15.584871  125655 start.go:360] acquireMachinesLock for functional-274272: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 21:56:15.584938  125655 start.go:364] duration metric: took 31.116µs to acquireMachinesLock for "functional-274272"
	I1119 21:56:15.584961  125655 start.go:96] Skipping create...Using existing machine configuration
	I1119 21:56:15.584971  125655 fix.go:54] fixHost starting: 
	I1119 21:56:15.587127  125655 fix.go:112] recreateIfNeeded on functional-274272: state=Running err=<nil>
	W1119 21:56:15.587160  125655 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 21:56:15.589621  125655 out.go:252] * Updating the running kvm2 "functional-274272" VM ...
	I1119 21:56:15.589658  125655 machine.go:94] provisionDockerMachine start ...
	I1119 21:56:15.592549  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.593154  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.593187  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.593360  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.593603  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.593618  125655 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:56:15.702194  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-274272
	
	I1119 21:56:15.702244  125655 buildroot.go:166] provisioning hostname "functional-274272"
	I1119 21:56:15.705141  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.705571  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.705614  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.705842  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.706110  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.706125  125655 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-274272 && echo "functional-274272" | sudo tee /etc/hostname
	I1119 21:56:15.846160  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-274272
	
	I1119 21:56:15.849601  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.850076  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.850116  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.850306  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:15.850538  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:15.850562  125655 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-274272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-274272/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-274272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:56:15.958572  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 21:56:15.958602  125655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 21:56:15.958621  125655 buildroot.go:174] setting up certificates
	I1119 21:56:15.958644  125655 provision.go:84] configureAuth start
	I1119 21:56:15.961541  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.961948  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.961978  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.964387  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.964833  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:15.964860  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:15.965011  125655 provision.go:143] copyHostCerts
	I1119 21:56:15.965045  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 21:56:15.965088  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 21:56:15.965106  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 21:56:15.965186  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 21:56:15.965327  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 21:56:15.965363  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 21:56:15.965371  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 21:56:15.965420  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 21:56:15.965509  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 21:56:15.965533  125655 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 21:56:15.965543  125655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 21:56:15.965592  125655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 21:56:15.965675  125655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.functional-274272 san=[127.0.0.1 192.168.39.56 functional-274272 localhost minikube]
	I1119 21:56:16.178107  125655 provision.go:177] copyRemoteCerts
	I1119 21:56:16.178177  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:56:16.180523  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.180929  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:16.180960  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.181094  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:16.267429  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 21:56:16.267516  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 21:56:16.303049  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 21:56:16.303134  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:56:16.336134  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 21:56:16.336220  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 21:56:16.369360  125655 provision.go:87] duration metric: took 410.702355ms to configureAuth
	I1119 21:56:16.369395  125655 buildroot.go:189] setting minikube options for container-runtime
	I1119 21:56:16.369609  125655 config.go:182] Loaded profile config "functional-274272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:56:16.372543  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.372941  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:16.372970  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:16.373148  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:16.373382  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:16.373404  125655 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:56:21.981912  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:56:21.981950  125655 machine.go:97] duration metric: took 6.392282192s to provisionDockerMachine
	I1119 21:56:21.981967  125655 start.go:293] postStartSetup for "functional-274272" (driver="kvm2")
	I1119 21:56:21.981980  125655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:56:21.982049  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:56:21.985113  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:21.985484  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:21.985537  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:21.985749  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.102924  125655 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:56:22.114116  125655 command_runner.go:130] > NAME=Buildroot
	I1119 21:56:22.114134  125655 command_runner.go:130] > VERSION=2025.02-dirty
	I1119 21:56:22.114138  125655 command_runner.go:130] > ID=buildroot
	I1119 21:56:22.114143  125655 command_runner.go:130] > VERSION_ID=2025.02
	I1119 21:56:22.114148  125655 command_runner.go:130] > PRETTY_NAME="Buildroot 2025.02"
	I1119 21:56:22.114192  125655 info.go:137] Remote host: Buildroot 2025.02
	I1119 21:56:22.114211  125655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 21:56:22.114270  125655 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 21:56:22.114383  125655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 21:56:22.114400  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 21:56:22.114498  125655 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts -> hosts in /etc/test/nested/copy/121369
	I1119 21:56:22.114510  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts -> /etc/test/nested/copy/121369/hosts
	I1119 21:56:22.114560  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/121369
	I1119 21:56:22.154301  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 21:56:22.234489  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts --> /etc/test/nested/copy/121369/hosts (40 bytes)
	I1119 21:56:22.328918  125655 start.go:296] duration metric: took 346.928603ms for postStartSetup
	I1119 21:56:22.328975  125655 fix.go:56] duration metric: took 6.74400308s for fixHost
	I1119 21:56:22.332245  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.332719  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.332761  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.333032  125655 main.go:143] libmachine: Using SSH client type: native
	I1119 21:56:22.333335  125655 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I1119 21:56:22.333355  125655 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 21:56:22.524275  125655 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763589382.515165407
	
	I1119 21:56:22.524306  125655 fix.go:216] guest clock: 1763589382.515165407
	I1119 21:56:22.524317  125655 fix.go:229] Guest: 2025-11-19 21:56:22.515165407 +0000 UTC Remote: 2025-11-19 21:56:22.328982326 +0000 UTC m=+6.856474824 (delta=186.183081ms)
	I1119 21:56:22.524340  125655 fix.go:200] guest clock delta is within tolerance: 186.183081ms
	I1119 21:56:22.524348  125655 start.go:83] releasing machines lock for "functional-274272", held for 6.939395313s
	I1119 21:56:22.527518  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.527977  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.528013  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.528866  125655 ssh_runner.go:195] Run: cat /version.json
	I1119 21:56:22.528919  125655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:56:22.532219  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532345  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532671  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.532706  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532818  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:56:22.532846  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:56:22.532915  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.533175  125655 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/functional-274272/id_rsa Username:docker}
	I1119 21:56:22.669778  125655 command_runner.go:130] > {"iso_version": "v1.37.0-1763575914-21918", "kicbase_version": "v0.0.48-1763561786-21918", "minikube_version": "v1.37.0", "commit": "425f5f15185086235ffd9f03de5624881b145800"}
	I1119 21:56:22.670044  125655 ssh_runner.go:195] Run: systemctl --version
	I1119 21:56:22.709748  125655 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1119 21:56:22.714621  125655 command_runner.go:130] > systemd 256 (256.7)
	I1119 21:56:22.714659  125655 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP -LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT -LIBARCHIVE
	I1119 21:56:22.714732  125655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:56:22.954339  125655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1119 21:56:22.974290  125655 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1119 21:56:22.977797  125655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:56:22.977901  125655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:56:23.008282  125655 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 21:56:23.008311  125655 start.go:496] detecting cgroup driver to use...
	I1119 21:56:23.008412  125655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:56:23.105440  125655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:56:23.135903  125655 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:56:23.135971  125655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:56:23.175334  125655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:56:23.257801  125655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:56:23.591204  125655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:56:23.882315  125655 docker.go:234] disabling docker service ...
	I1119 21:56:23.882405  125655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:56:23.921893  125655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:56:23.944430  125655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:56:24.230558  125655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:56:24.528514  125655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:56:24.549079  125655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:56:24.594053  125655 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1119 21:56:24.595417  125655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:56:24.595501  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.619383  125655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 21:56:24.619478  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.644219  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.664023  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.686767  125655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:56:24.708545  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.732834  125655 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.757166  125655 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:56:24.777845  125655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:56:24.796276  125655 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1119 21:56:24.796965  125655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:56:24.817150  125655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:56:25.056155  125655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 21:57:55.500242  125655 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.444033021s)
	I1119 21:57:55.500288  125655 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:57:55.500356  125655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:57:55.507439  125655 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1119 21:57:55.507472  125655 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1119 21:57:55.507489  125655 command_runner.go:130] > Device: 0,23	Inode: 1960        Links: 1
	I1119 21:57:55.507496  125655 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1119 21:57:55.507501  125655 command_runner.go:130] > Access: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507518  125655 command_runner.go:130] > Modify: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507523  125655 command_runner.go:130] > Change: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507528  125655 command_runner.go:130] >  Birth: 2025-11-19 21:57:55.296187903 +0000
	I1119 21:57:55.507551  125655 start.go:564] Will wait 60s for crictl version
	I1119 21:57:55.507616  125655 ssh_runner.go:195] Run: which crictl
	I1119 21:57:55.512454  125655 command_runner.go:130] > /usr/bin/crictl
	I1119 21:57:55.512630  125655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 21:57:55.557330  125655 command_runner.go:130] > Version:  0.1.0
	I1119 21:57:55.557354  125655 command_runner.go:130] > RuntimeName:  cri-o
	I1119 21:57:55.557359  125655 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1119 21:57:55.557366  125655 command_runner.go:130] > RuntimeApiVersion:  v1
	I1119 21:57:55.557387  125655 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 21:57:55.557484  125655 ssh_runner.go:195] Run: crio --version
	I1119 21:57:55.589692  125655 command_runner.go:130] > crio version 1.29.1
	I1119 21:57:55.589714  125655 command_runner.go:130] > Version:        1.29.1
	I1119 21:57:55.589733  125655 command_runner.go:130] > GitCommit:      unknown
	I1119 21:57:55.589738  125655 command_runner.go:130] > GitCommitDate:  unknown
	I1119 21:57:55.589742  125655 command_runner.go:130] > GitTreeState:   clean
	I1119 21:57:55.589748  125655 command_runner.go:130] > BuildDate:      2025-11-19T21:18:08Z
	I1119 21:57:55.589752  125655 command_runner.go:130] > GoVersion:      go1.23.4
	I1119 21:57:55.589755  125655 command_runner.go:130] > Compiler:       gc
	I1119 21:57:55.589760  125655 command_runner.go:130] > Platform:       linux/amd64
	I1119 21:57:55.589763  125655 command_runner.go:130] > Linkmode:       dynamic
	I1119 21:57:55.589779  125655 command_runner.go:130] > BuildTags:      
	I1119 21:57:55.589785  125655 command_runner.go:130] >   containers_image_ostree_stub
	I1119 21:57:55.589789  125655 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1119 21:57:55.589793  125655 command_runner.go:130] >   btrfs_noversion
	I1119 21:57:55.589798  125655 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1119 21:57:55.589802  125655 command_runner.go:130] >   libdm_no_deferred_remove
	I1119 21:57:55.589809  125655 command_runner.go:130] >   seccomp
	I1119 21:57:55.589813  125655 command_runner.go:130] > LDFlags:          unknown
	I1119 21:57:55.589817  125655 command_runner.go:130] > SeccompEnabled:   true
	I1119 21:57:55.589824  125655 command_runner.go:130] > AppArmorEnabled:  false
	I1119 21:57:55.590824  125655 ssh_runner.go:195] Run: crio --version
	I1119 21:57:55.623734  125655 command_runner.go:130] > crio version 1.29.1
	I1119 21:57:55.623758  125655 command_runner.go:130] > Version:        1.29.1
	I1119 21:57:55.623767  125655 command_runner.go:130] > GitCommit:      unknown
	I1119 21:57:55.623773  125655 command_runner.go:130] > GitCommitDate:  unknown
	I1119 21:57:55.623778  125655 command_runner.go:130] > GitTreeState:   clean
	I1119 21:57:55.623785  125655 command_runner.go:130] > BuildDate:      2025-11-19T21:18:08Z
	I1119 21:57:55.623791  125655 command_runner.go:130] > GoVersion:      go1.23.4
	I1119 21:57:55.623797  125655 command_runner.go:130] > Compiler:       gc
	I1119 21:57:55.623803  125655 command_runner.go:130] > Platform:       linux/amd64
	I1119 21:57:55.623808  125655 command_runner.go:130] > Linkmode:       dynamic
	I1119 21:57:55.623815  125655 command_runner.go:130] > BuildTags:      
	I1119 21:57:55.623822  125655 command_runner.go:130] >   containers_image_ostree_stub
	I1119 21:57:55.623832  125655 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1119 21:57:55.623838  125655 command_runner.go:130] >   btrfs_noversion
	I1119 21:57:55.623847  125655 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1119 21:57:55.623868  125655 command_runner.go:130] >   libdm_no_deferred_remove
	I1119 21:57:55.623897  125655 command_runner.go:130] >   seccomp
	I1119 21:57:55.623907  125655 command_runner.go:130] > LDFlags:          unknown
	I1119 21:57:55.623914  125655 command_runner.go:130] > SeccompEnabled:   true
	I1119 21:57:55.623922  125655 command_runner.go:130] > AppArmorEnabled:  false
	I1119 21:57:55.626580  125655 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 21:57:55.630696  125655 main.go:143] libmachine: domain functional-274272 has defined MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:57:55.631264  125655 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:49:c9", ip: ""} in network mk-functional-274272: {Iface:virbr1 ExpiryTime:2025-11-19 22:55:13 +0000 UTC Type:0 Mac:52:54:00:24:49:c9 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-274272 Clientid:01:52:54:00:24:49:c9}
	I1119 21:57:55.631302  125655 main.go:143] libmachine: domain functional-274272 has defined IP address 192.168.39.56 and MAC address 52:54:00:24:49:c9 in network mk-functional-274272
	I1119 21:57:55.631528  125655 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 21:57:55.636396  125655 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1119 21:57:55.636493  125655 kubeadm.go:884] updating cluster {Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:57:55.636629  125655 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:57:55.636691  125655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:57:55.684104  125655 command_runner.go:130] > {
	I1119 21:57:55.684131  125655 command_runner.go:130] >   "images": [
	I1119 21:57:55.684137  125655 command_runner.go:130] >     {
	I1119 21:57:55.684148  125655 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1119 21:57:55.684155  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684165  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1119 21:57:55.684170  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684176  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684188  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1119 21:57:55.684199  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1119 21:57:55.684209  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684215  125655 command_runner.go:130] >       "size": "109379124",
	I1119 21:57:55.684222  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684231  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684259  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684270  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684275  125655 command_runner.go:130] >     },
	I1119 21:57:55.684280  125655 command_runner.go:130] >     {
	I1119 21:57:55.684290  125655 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1119 21:57:55.684299  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684308  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1119 21:57:55.684317  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684323  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684344  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1119 21:57:55.684360  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1119 21:57:55.684366  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684377  125655 command_runner.go:130] >       "size": "31470524",
	I1119 21:57:55.684385  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684392  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684399  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684407  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684412  125655 command_runner.go:130] >     },
	I1119 21:57:55.684421  125655 command_runner.go:130] >     {
	I1119 21:57:55.684430  125655 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1119 21:57:55.684439  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684447  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1119 21:57:55.684457  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684463  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684478  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1119 21:57:55.684492  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1119 21:57:55.684501  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684507  125655 command_runner.go:130] >       "size": "76103547",
	I1119 21:57:55.684514  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.684521  125655 command_runner.go:130] >       "username": "nonroot",
	I1119 21:57:55.684530  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684536  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684544  125655 command_runner.go:130] >     },
	I1119 21:57:55.684551  125655 command_runner.go:130] >     {
	I1119 21:57:55.684561  125655 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1119 21:57:55.684567  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684578  125655 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1119 21:57:55.684584  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684591  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684603  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1119 21:57:55.684630  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1119 21:57:55.684645  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684659  125655 command_runner.go:130] >       "size": "195976448",
	I1119 21:57:55.684669  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684675  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684681  125655 command_runner.go:130] >       },
	I1119 21:57:55.684687  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684693  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684699  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684704  125655 command_runner.go:130] >     },
	I1119 21:57:55.684719  125655 command_runner.go:130] >     {
	I1119 21:57:55.684731  125655 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1119 21:57:55.684738  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684745  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1119 21:57:55.684753  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684759  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684771  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1119 21:57:55.684783  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1119 21:57:55.684791  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684796  125655 command_runner.go:130] >       "size": "89046001",
	I1119 21:57:55.684802  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684808  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684817  125655 command_runner.go:130] >       },
	I1119 21:57:55.684822  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684830  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.684836  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.684844  125655 command_runner.go:130] >     },
	I1119 21:57:55.684849  125655 command_runner.go:130] >     {
	I1119 21:57:55.684860  125655 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1119 21:57:55.684866  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.684898  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1119 21:57:55.684908  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684914  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.684927  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1119 21:57:55.684940  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1119 21:57:55.684953  125655 command_runner.go:130] >       ],
	I1119 21:57:55.684962  125655 command_runner.go:130] >       "size": "76004181",
	I1119 21:57:55.684968  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.684976  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.684981  125655 command_runner.go:130] >       },
	I1119 21:57:55.684990  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.684995  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685004  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685009  125655 command_runner.go:130] >     },
	I1119 21:57:55.685015  125655 command_runner.go:130] >     {
	I1119 21:57:55.685025  125655 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1119 21:57:55.685034  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685041  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1119 21:57:55.685049  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685055  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685069  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1119 21:57:55.685081  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1119 21:57:55.685090  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685095  125655 command_runner.go:130] >       "size": "73138073",
	I1119 21:57:55.685104  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.685110  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685119  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685125  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685134  125655 command_runner.go:130] >     },
	I1119 21:57:55.685140  125655 command_runner.go:130] >     {
	I1119 21:57:55.685151  125655 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1119 21:57:55.685158  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685166  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1119 21:57:55.685174  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685181  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685213  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1119 21:57:55.685226  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1119 21:57:55.685240  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685255  125655 command_runner.go:130] >       "size": "53844823",
	I1119 21:57:55.685264  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.685270  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.685278  125655 command_runner.go:130] >       },
	I1119 21:57:55.685285  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685294  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685299  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.685307  125655 command_runner.go:130] >     },
	I1119 21:57:55.685311  125655 command_runner.go:130] >     {
	I1119 21:57:55.685322  125655 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1119 21:57:55.685327  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.685334  125655 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.685339  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685346  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.685357  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1119 21:57:55.685370  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1119 21:57:55.685378  125655 command_runner.go:130] >       ],
	I1119 21:57:55.685383  125655 command_runner.go:130] >       "size": "742092",
	I1119 21:57:55.685390  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.685396  125655 command_runner.go:130] >         "value": "65535"
	I1119 21:57:55.685403  125655 command_runner.go:130] >       },
	I1119 21:57:55.685408  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.685414  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.685419  125655 command_runner.go:130] >       "pinned": true
	I1119 21:57:55.685427  125655 command_runner.go:130] >     }
	I1119 21:57:55.685433  125655 command_runner.go:130] >   ]
	I1119 21:57:55.685437  125655 command_runner.go:130] > }
	I1119 21:57:55.686566  125655 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:57:55.686586  125655 crio.go:433] Images already preloaded, skipping extraction
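For reference, the preload decision logged above (crio.go:514/crio.go:433) is driven by the `crictl images --output json` shape captured in this output. Below is a minimal Go sketch of such a check, assuming only that JSON layout (id, repoTags, repoDigests, size, pinned) and using a hypothetical required-image list copied from the repoTags in the log; it is not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON emitted by `crictl images --output json`
// as shown in the log above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Hypothetical required set; the tags come from the repoTags above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/kube-controller-manager:v1.34.1",
		"registry.k8s.io/kube-scheduler:v1.34.1",
		"registry.k8s.io/kube-proxy:v1.34.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"registry.k8s.io/pause:3.10.1",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	// Index every tag that is already present in the CRI-O image store.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing:", want)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime.")
}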
	I1119 21:57:55.686648  125655 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:57:55.724518  125655 command_runner.go:130] > {
	I1119 21:57:55.724544  125655 command_runner.go:130] >   "images": [
	I1119 21:57:55.724550  125655 command_runner.go:130] >     {
	I1119 21:57:55.724562  125655 command_runner.go:130] >       "id": "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1119 21:57:55.724570  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724578  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1119 21:57:55.724582  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724587  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724597  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1119 21:57:55.724607  125655 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1119 21:57:55.724613  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724619  125655 command_runner.go:130] >       "size": "109379124",
	I1119 21:57:55.724626  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724632  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.724643  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724650  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724656  125655 command_runner.go:130] >     },
	I1119 21:57:55.724662  125655 command_runner.go:130] >     {
	I1119 21:57:55.724672  125655 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1119 21:57:55.724681  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724688  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1119 21:57:55.724694  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724714  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724730  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1119 21:57:55.724751  125655 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1119 21:57:55.724760  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724772  125655 command_runner.go:130] >       "size": "31470524",
	I1119 21:57:55.724779  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724789  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.724795  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724802  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724808  125655 command_runner.go:130] >     },
	I1119 21:57:55.724814  125655 command_runner.go:130] >     {
	I1119 21:57:55.724828  125655 command_runner.go:130] >       "id": "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1119 21:57:55.724838  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724847  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1119 21:57:55.724853  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724860  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.724872  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1119 21:57:55.724903  125655 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1119 21:57:55.724913  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724923  125655 command_runner.go:130] >       "size": "76103547",
	I1119 21:57:55.724930  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.724940  125655 command_runner.go:130] >       "username": "nonroot",
	I1119 21:57:55.724945  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.724950  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.724955  125655 command_runner.go:130] >     },
	I1119 21:57:55.724959  125655 command_runner.go:130] >     {
	I1119 21:57:55.724967  125655 command_runner.go:130] >       "id": "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1119 21:57:55.724974  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.724982  125655 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1119 21:57:55.724989  125655 command_runner.go:130] >       ],
	I1119 21:57:55.724996  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725010  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1119 21:57:55.725033  125655 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1119 21:57:55.725048  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725055  125655 command_runner.go:130] >       "size": "195976448",
	I1119 21:57:55.725065  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725071  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725077  125655 command_runner.go:130] >       },
	I1119 21:57:55.725083  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725089  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725097  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725103  125655 command_runner.go:130] >     },
	I1119 21:57:55.725109  125655 command_runner.go:130] >     {
	I1119 21:57:55.725120  125655 command_runner.go:130] >       "id": "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1119 21:57:55.725127  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725136  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1119 21:57:55.725142  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725149  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725161  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1119 21:57:55.725180  125655 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1119 21:57:55.725186  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725203  125655 command_runner.go:130] >       "size": "89046001",
	I1119 21:57:55.725210  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725218  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725231  125655 command_runner.go:130] >       },
	I1119 21:57:55.725241  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725247  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725256  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725260  125655 command_runner.go:130] >     },
	I1119 21:57:55.725265  125655 command_runner.go:130] >     {
	I1119 21:57:55.725277  125655 command_runner.go:130] >       "id": "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1119 21:57:55.725284  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725295  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1119 21:57:55.725301  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725308  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725324  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1119 21:57:55.725348  125655 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1119 21:57:55.725357  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725364  125655 command_runner.go:130] >       "size": "76004181",
	I1119 21:57:55.725373  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725379  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725385  125655 command_runner.go:130] >       },
	I1119 21:57:55.725389  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725395  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725400  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725403  125655 command_runner.go:130] >     },
	I1119 21:57:55.725408  125655 command_runner.go:130] >     {
	I1119 21:57:55.725415  125655 command_runner.go:130] >       "id": "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1119 21:57:55.725421  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725431  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1119 21:57:55.725437  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725443  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725453  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1119 21:57:55.725463  125655 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1119 21:57:55.725470  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725477  125655 command_runner.go:130] >       "size": "73138073",
	I1119 21:57:55.725482  125655 command_runner.go:130] >       "uid": null,
	I1119 21:57:55.725489  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725496  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725503  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725509  125655 command_runner.go:130] >     },
	I1119 21:57:55.725515  125655 command_runner.go:130] >     {
	I1119 21:57:55.725525  125655 command_runner.go:130] >       "id": "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1119 21:57:55.725531  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725539  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1119 21:57:55.725545  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725551  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725603  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1119 21:57:55.725620  125655 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1119 21:57:55.725634  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725643  125655 command_runner.go:130] >       "size": "53844823",
	I1119 21:57:55.725649  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725655  125655 command_runner.go:130] >         "value": "0"
	I1119 21:57:55.725661  125655 command_runner.go:130] >       },
	I1119 21:57:55.725667  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725674  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725680  125655 command_runner.go:130] >       "pinned": false
	I1119 21:57:55.725685  125655 command_runner.go:130] >     },
	I1119 21:57:55.725691  125655 command_runner.go:130] >     {
	I1119 21:57:55.725704  125655 command_runner.go:130] >       "id": "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1119 21:57:55.725711  125655 command_runner.go:130] >       "repoTags": [
	I1119 21:57:55.725721  125655 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.725727  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725733  125655 command_runner.go:130] >       "repoDigests": [
	I1119 21:57:55.725745  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1119 21:57:55.725759  125655 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1119 21:57:55.725765  125655 command_runner.go:130] >       ],
	I1119 21:57:55.725774  125655 command_runner.go:130] >       "size": "742092",
	I1119 21:57:55.725781  125655 command_runner.go:130] >       "uid": {
	I1119 21:57:55.725787  125655 command_runner.go:130] >         "value": "65535"
	I1119 21:57:55.725795  125655 command_runner.go:130] >       },
	I1119 21:57:55.725818  125655 command_runner.go:130] >       "username": "",
	I1119 21:57:55.725827  125655 command_runner.go:130] >       "spec": null,
	I1119 21:57:55.725833  125655 command_runner.go:130] >       "pinned": true
	I1119 21:57:55.725838  125655 command_runner.go:130] >     }
	I1119 21:57:55.725844  125655 command_runner.go:130] >   ]
	I1119 21:57:55.725849  125655 command_runner.go:130] > }
	I1119 21:57:55.726172  125655 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:57:55.726196  125655 cache_images.go:86] Images are preloaded, skipping loading
	I1119 21:57:55.726209  125655 kubeadm.go:935] updating node { 192.168.39.56 8441 v1.34.1 crio true true} ...
	I1119 21:57:55.726334  125655 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-274272 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
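The kubelet drop-in logged above is parameterized by the node values from the cluster config (KubernetesVersion v1.34.1, node name functional-274272, node IP 192.168.39.56). The following Go sketch renders a comparable unit with text/template; the template text and parameter struct are illustrative assumptions, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// kubeletParams holds the hypothetical inputs; the values below mirror the log.
type kubeletParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

// unitTmpl is an illustrative systemd drop-in, shaped like the one logged above.
const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	p := kubeletParams{
		KubernetesVersion: "v1.34.1",
		NodeName:          "functional-274272",
		NodeIP:            "192.168.39.56",
	}
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}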
	I1119 21:57:55.726417  125655 ssh_runner.go:195] Run: crio config
	I1119 21:57:55.773985  125655 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1119 21:57:55.774020  125655 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1119 21:57:55.774032  125655 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1119 21:57:55.774049  125655 command_runner.go:130] > #
	I1119 21:57:55.774057  125655 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1119 21:57:55.774064  125655 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1119 21:57:55.774073  125655 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1119 21:57:55.774083  125655 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1119 21:57:55.774088  125655 command_runner.go:130] > # reload'.
	I1119 21:57:55.774100  125655 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1119 21:57:55.774114  125655 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1119 21:57:55.774123  125655 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1119 21:57:55.774134  125655 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1119 21:57:55.774140  125655 command_runner.go:130] > [crio]
	I1119 21:57:55.774153  125655 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1119 21:57:55.774158  125655 command_runner.go:130] > # containers images, in this directory.
	I1119 21:57:55.774167  125655 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1119 21:57:55.774185  125655 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1119 21:57:55.774195  125655 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1119 21:57:55.774215  125655 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1119 21:57:55.774225  125655 command_runner.go:130] > # imagestore = ""
	I1119 21:57:55.774235  125655 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1119 21:57:55.774244  125655 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1119 21:57:55.774249  125655 command_runner.go:130] > # storage_driver = "overlay"
	I1119 21:57:55.774256  125655 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1119 21:57:55.774266  125655 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1119 21:57:55.774272  125655 command_runner.go:130] > storage_option = [
	I1119 21:57:55.774283  125655 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1119 21:57:55.774288  125655 command_runner.go:130] > ]
	I1119 21:57:55.774298  125655 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1119 21:57:55.774311  125655 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1119 21:57:55.774319  125655 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1119 21:57:55.774328  125655 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1119 21:57:55.774340  125655 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1119 21:57:55.774346  125655 command_runner.go:130] > # always happen on a node reboot
	I1119 21:57:55.774354  125655 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1119 21:57:55.774377  125655 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1119 21:57:55.774390  125655 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1119 21:57:55.774398  125655 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1119 21:57:55.774409  125655 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1119 21:57:55.774421  125655 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1119 21:57:55.774436  125655 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1119 21:57:55.774442  125655 command_runner.go:130] > # internal_wipe = true
	I1119 21:57:55.774455  125655 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1119 21:57:55.774462  125655 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1119 21:57:55.774469  125655 command_runner.go:130] > # internal_repair = false
	I1119 21:57:55.774476  125655 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1119 21:57:55.774486  125655 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1119 21:57:55.774494  125655 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1119 21:57:55.774508  125655 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1119 21:57:55.774516  125655 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1119 21:57:55.774529  125655 command_runner.go:130] > [crio.api]
	I1119 21:57:55.774537  125655 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1119 21:57:55.774545  125655 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1119 21:57:55.774555  125655 command_runner.go:130] > # IP address on which the stream server will listen.
	I1119 21:57:55.774560  125655 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1119 21:57:55.774574  125655 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1119 21:57:55.774583  125655 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1119 21:57:55.774589  125655 command_runner.go:130] > # stream_port = "0"
	I1119 21:57:55.774598  125655 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1119 21:57:55.774609  125655 command_runner.go:130] > # stream_enable_tls = false
	I1119 21:57:55.774617  125655 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1119 21:57:55.774621  125655 command_runner.go:130] > # stream_idle_timeout = ""
	I1119 21:57:55.774630  125655 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1119 21:57:55.774635  125655 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1119 21:57:55.774639  125655 command_runner.go:130] > # minutes.
	I1119 21:57:55.774643  125655 command_runner.go:130] > # stream_tls_cert = ""
	I1119 21:57:55.774648  125655 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1119 21:57:55.774656  125655 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1119 21:57:55.774665  125655 command_runner.go:130] > # stream_tls_key = ""
	I1119 21:57:55.774673  125655 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1119 21:57:55.774680  125655 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1119 21:57:55.774706  125655 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1119 21:57:55.774716  125655 command_runner.go:130] > # stream_tls_ca = ""
	I1119 21:57:55.774726  125655 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1119 21:57:55.774734  125655 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1119 21:57:55.774745  125655 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1119 21:57:55.774755  125655 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
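The `crio config` dump being captured here is plain TOML, so individual settings such as the grpc sizes above, or conmon, cgroup_manager, and pids_limit further down, can be pulled out of the command output directly. A minimal Go sketch, assuming crio is runnable locally and using naive line matching instead of a proper TOML parser; only the key names are taken from the dump, everything else is illustrative.

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test issues over SSH (ssh_runner.go:195 above),
	// run locally here for illustration.
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		panic(err)
	}

	// Keys of interest from the dump; a real parser would use a TOML library.
	keys := map[string]string{"cgroup_manager": "", "conmon": "", "pids_limit": ""}

	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip comment lines like the ones logged here
		}
		for k := range keys {
			if strings.HasPrefix(line, k+" ") || strings.HasPrefix(line, k+"=") {
				if _, v, ok := strings.Cut(line, "="); ok {
					keys[k] = strings.TrimSpace(v)
				}
			}
		}
	}
	for k, v := range keys {
		fmt.Printf("%s = %s\n", k, v)
	}
}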
	I1119 21:57:55.774765  125655 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1119 21:57:55.774777  125655 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1119 21:57:55.774782  125655 command_runner.go:130] > [crio.runtime]
	I1119 21:57:55.774791  125655 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1119 21:57:55.774804  125655 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1119 21:57:55.774810  125655 command_runner.go:130] > # "nofile=1024:2048"
	I1119 21:57:55.774827  125655 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1119 21:57:55.774834  125655 command_runner.go:130] > # default_ulimits = [
	I1119 21:57:55.774841  125655 command_runner.go:130] > # ]
	I1119 21:57:55.774850  125655 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1119 21:57:55.774857  125655 command_runner.go:130] > # no_pivot = false
	I1119 21:57:55.774866  125655 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1119 21:57:55.774891  125655 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1119 21:57:55.774899  125655 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1119 21:57:55.774910  125655 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1119 21:57:55.774918  125655 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1119 21:57:55.774932  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1119 21:57:55.774939  125655 command_runner.go:130] > conmon = "/usr/bin/conmon"
	I1119 21:57:55.774947  125655 command_runner.go:130] > # Cgroup setting for conmon
	I1119 21:57:55.774955  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1119 21:57:55.774964  125655 command_runner.go:130] > conmon_cgroup = "pod"
	I1119 21:57:55.774974  125655 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1119 21:57:55.774985  125655 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1119 21:57:55.774996  125655 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1119 21:57:55.775012  125655 command_runner.go:130] > conmon_env = [
	I1119 21:57:55.775026  125655 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1119 21:57:55.775031  125655 command_runner.go:130] > ]
	I1119 21:57:55.775043  125655 command_runner.go:130] > # Additional environment variables to set for all the
	I1119 21:57:55.775051  125655 command_runner.go:130] > # containers. These are overridden if set in the
	I1119 21:57:55.775061  125655 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1119 21:57:55.775067  125655 command_runner.go:130] > # default_env = [
	I1119 21:57:55.775073  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775081  125655 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1119 21:57:55.775095  125655 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1119 21:57:55.775101  125655 command_runner.go:130] > # selinux = false
	I1119 21:57:55.775112  125655 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1119 21:57:55.775121  125655 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1119 21:57:55.775133  125655 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1119 21:57:55.775139  125655 command_runner.go:130] > # seccomp_profile = ""
	I1119 21:57:55.775149  125655 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1119 21:57:55.775157  125655 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1119 21:57:55.775167  125655 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1119 21:57:55.775178  125655 command_runner.go:130] > # which might increase security.
	I1119 21:57:55.775185  125655 command_runner.go:130] > # This option is currently deprecated,
	I1119 21:57:55.775195  125655 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1119 21:57:55.775209  125655 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1119 21:57:55.775222  125655 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1119 21:57:55.775232  125655 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1119 21:57:55.775246  125655 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1119 21:57:55.775259  125655 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1119 21:57:55.775271  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.775278  125655 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1119 21:57:55.775287  125655 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1119 21:57:55.775299  125655 command_runner.go:130] > # the cgroup blockio controller.
	I1119 21:57:55.775305  125655 command_runner.go:130] > # blockio_config_file = ""
	I1119 21:57:55.775316  125655 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1119 21:57:55.775327  125655 command_runner.go:130] > # blockio parameters.
	I1119 21:57:55.775346  125655 command_runner.go:130] > # blockio_reload = false
	I1119 21:57:55.775357  125655 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1119 21:57:55.775363  125655 command_runner.go:130] > # irqbalance daemon.
	I1119 21:57:55.775379  125655 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1119 21:57:55.775387  125655 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1119 21:57:55.775397  125655 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1119 21:57:55.775412  125655 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1119 21:57:55.775428  125655 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1119 21:57:55.775441  125655 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1119 21:57:55.775450  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.775459  125655 command_runner.go:130] > # rdt_config_file = ""
	I1119 21:57:55.775465  125655 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1119 21:57:55.775470  125655 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1119 21:57:55.775549  125655 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1119 21:57:55.775561  125655 command_runner.go:130] > # separate_pull_cgroup = ""
	I1119 21:57:55.775567  125655 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1119 21:57:55.775573  125655 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1119 21:57:55.775576  125655 command_runner.go:130] > # will be added.
	I1119 21:57:55.775579  125655 command_runner.go:130] > # default_capabilities = [
	I1119 21:57:55.775584  125655 command_runner.go:130] > # 	"CHOWN",
	I1119 21:57:55.775591  125655 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1119 21:57:55.775604  125655 command_runner.go:130] > # 	"FSETID",
	I1119 21:57:55.775610  125655 command_runner.go:130] > # 	"FOWNER",
	I1119 21:57:55.775616  125655 command_runner.go:130] > # 	"SETGID",
	I1119 21:57:55.775623  125655 command_runner.go:130] > # 	"SETUID",
	I1119 21:57:55.775628  125655 command_runner.go:130] > # 	"SETPCAP",
	I1119 21:57:55.775634  125655 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1119 21:57:55.775641  125655 command_runner.go:130] > # 	"KILL",
	I1119 21:57:55.775646  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775659  125655 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1119 21:57:55.775666  125655 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1119 21:57:55.775672  125655 command_runner.go:130] > # add_inheritable_capabilities = false
	I1119 21:57:55.775685  125655 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1119 21:57:55.775705  125655 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1119 21:57:55.775716  125655 command_runner.go:130] > default_sysctls = [
	I1119 21:57:55.775723  125655 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1119 21:57:55.775728  125655 command_runner.go:130] > ]
	I1119 21:57:55.775738  125655 command_runner.go:130] > # List of devices on the host that a
	I1119 21:57:55.775747  125655 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1119 21:57:55.775754  125655 command_runner.go:130] > # allowed_devices = [
	I1119 21:57:55.775759  125655 command_runner.go:130] > # 	"/dev/fuse",
	I1119 21:57:55.775766  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775772  125655 command_runner.go:130] > # List of additional devices. specified as
	I1119 21:57:55.775779  125655 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1119 21:57:55.775791  125655 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1119 21:57:55.775801  125655 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1119 21:57:55.775807  125655 command_runner.go:130] > # additional_devices = [
	I1119 21:57:55.775813  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775824  125655 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1119 21:57:55.775830  125655 command_runner.go:130] > # cdi_spec_dirs = [
	I1119 21:57:55.775836  125655 command_runner.go:130] > # 	"/etc/cdi",
	I1119 21:57:55.775845  125655 command_runner.go:130] > # 	"/var/run/cdi",
	I1119 21:57:55.775850  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775860  125655 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1119 21:57:55.775871  125655 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1119 21:57:55.775894  125655 command_runner.go:130] > # Defaults to false.
	I1119 21:57:55.775901  125655 command_runner.go:130] > # device_ownership_from_security_context = false
	I1119 21:57:55.775919  125655 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1119 21:57:55.775932  125655 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1119 21:57:55.775938  125655 command_runner.go:130] > # hooks_dir = [
	I1119 21:57:55.775949  125655 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1119 21:57:55.775954  125655 command_runner.go:130] > # ]
	I1119 21:57:55.775967  125655 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1119 21:57:55.775976  125655 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1119 21:57:55.775986  125655 command_runner.go:130] > # its default mounts from the following two files:
	I1119 21:57:55.775990  125655 command_runner.go:130] > #
	I1119 21:57:55.776006  125655 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1119 21:57:55.776015  125655 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1119 21:57:55.776024  125655 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1119 21:57:55.776032  125655 command_runner.go:130] > #
	I1119 21:57:55.776042  125655 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1119 21:57:55.776054  125655 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1119 21:57:55.776065  125655 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1119 21:57:55.776077  125655 command_runner.go:130] > #      only add mounts it finds in this file.
	I1119 21:57:55.776082  125655 command_runner.go:130] > #
	I1119 21:57:55.776089  125655 command_runner.go:130] > # default_mounts_file = ""
	I1119 21:57:55.776099  125655 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1119 21:57:55.776105  125655 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1119 21:57:55.776113  125655 command_runner.go:130] > pids_limit = 1024
	I1119 21:57:55.776123  125655 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1119 21:57:55.776136  125655 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1119 21:57:55.776145  125655 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1119 21:57:55.776161  125655 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1119 21:57:55.776171  125655 command_runner.go:130] > # log_size_max = -1
	I1119 21:57:55.776181  125655 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1119 21:57:55.776190  125655 command_runner.go:130] > # log_to_journald = false
	I1119 21:57:55.776199  125655 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1119 21:57:55.776213  125655 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1119 21:57:55.776219  125655 command_runner.go:130] > # Path to directory for container attach sockets.
	I1119 21:57:55.776229  125655 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1119 21:57:55.776238  125655 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1119 21:57:55.776248  125655 command_runner.go:130] > # bind_mount_prefix = ""
	I1119 21:57:55.776256  125655 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1119 21:57:55.776266  125655 command_runner.go:130] > # read_only = false
	I1119 21:57:55.776275  125655 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1119 21:57:55.776287  125655 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1119 21:57:55.776293  125655 command_runner.go:130] > # live configuration reload.
	I1119 21:57:55.776299  125655 command_runner.go:130] > # log_level = "info"
	I1119 21:57:55.776309  125655 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1119 21:57:55.776330  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.776339  125655 command_runner.go:130] > # log_filter = ""
	I1119 21:57:55.776349  125655 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1119 21:57:55.776364  125655 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1119 21:57:55.776372  125655 command_runner.go:130] > # separated by comma.
	I1119 21:57:55.776384  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776394  125655 command_runner.go:130] > # uid_mappings = ""
	I1119 21:57:55.776403  125655 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1119 21:57:55.776415  125655 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1119 21:57:55.776423  125655 command_runner.go:130] > # separated by comma.
	I1119 21:57:55.776433  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776443  125655 command_runner.go:130] > # gid_mappings = ""
	I1119 21:57:55.776452  125655 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1119 21:57:55.776465  125655 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1119 21:57:55.776478  125655 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1119 21:57:55.776490  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776498  125655 command_runner.go:130] > # minimum_mappable_uid = -1
	I1119 21:57:55.776507  125655 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1119 21:57:55.776520  125655 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1119 21:57:55.776530  125655 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1119 21:57:55.776540  125655 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1119 21:57:55.776548  125655 command_runner.go:130] > # minimum_mappable_gid = -1
	I1119 21:57:55.776557  125655 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1119 21:57:55.776569  125655 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1119 21:57:55.776587  125655 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1119 21:57:55.776597  125655 command_runner.go:130] > # ctr_stop_timeout = 30
	I1119 21:57:55.776607  125655 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1119 21:57:55.776619  125655 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1119 21:57:55.776626  125655 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1119 21:57:55.776637  125655 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1119 21:57:55.776642  125655 command_runner.go:130] > drop_infra_ctr = false
	I1119 21:57:55.776649  125655 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1119 21:57:55.776656  125655 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1119 21:57:55.776678  125655 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1119 21:57:55.776688  125655 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1119 21:57:55.776700  125655 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1119 21:57:55.776712  125655 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1119 21:57:55.776722  125655 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1119 21:57:55.776733  125655 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1119 21:57:55.776739  125655 command_runner.go:130] > # shared_cpuset = ""
	I1119 21:57:55.776751  125655 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1119 21:57:55.776759  125655 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1119 21:57:55.776765  125655 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1119 21:57:55.776775  125655 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1119 21:57:55.776785  125655 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1119 21:57:55.776794  125655 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1119 21:57:55.776810  125655 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1119 21:57:55.776820  125655 command_runner.go:130] > # enable_criu_support = false
	I1119 21:57:55.776828  125655 command_runner.go:130] > # Enable/disable the generation of the container,
	I1119 21:57:55.776840  125655 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1119 21:57:55.776847  125655 command_runner.go:130] > # enable_pod_events = false
	I1119 21:57:55.776856  125655 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1119 21:57:55.776862  125655 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1119 21:57:55.776870  125655 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1119 21:57:55.776886  125655 command_runner.go:130] > # default_runtime = "runc"
	I1119 21:57:55.776895  125655 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1119 21:57:55.776911  125655 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1119 21:57:55.776924  125655 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1119 21:57:55.776935  125655 command_runner.go:130] > # creation as a file is not desired either.
	I1119 21:57:55.776947  125655 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1119 21:57:55.776959  125655 command_runner.go:130] > # the hostname is being managed dynamically.
	I1119 21:57:55.776967  125655 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1119 21:57:55.776970  125655 command_runner.go:130] > # ]
	I1119 21:57:55.776979  125655 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1119 21:57:55.776993  125655 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1119 21:57:55.777006  125655 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1119 21:57:55.777024  125655 command_runner.go:130] > # Each entry in the table should follow the format:
	I1119 21:57:55.777032  125655 command_runner.go:130] > #
	I1119 21:57:55.777040  125655 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1119 21:57:55.777050  125655 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1119 21:57:55.777057  125655 command_runner.go:130] > # runtime_type = "oci"
	I1119 21:57:55.777119  125655 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1119 21:57:55.777131  125655 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1119 21:57:55.777138  125655 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1119 21:57:55.777145  125655 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1119 21:57:55.777152  125655 command_runner.go:130] > # monitor_env = []
	I1119 21:57:55.777163  125655 command_runner.go:130] > # privileged_without_host_devices = false
	I1119 21:57:55.777172  125655 command_runner.go:130] > # allowed_annotations = []
	I1119 21:57:55.777208  125655 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1119 21:57:55.777215  125655 command_runner.go:130] > # Where:
	I1119 21:57:55.777223  125655 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1119 21:57:55.777236  125655 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1119 21:57:55.777247  125655 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1119 21:57:55.777258  125655 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1119 21:57:55.777268  125655 command_runner.go:130] > #   in $PATH.
	I1119 21:57:55.777278  125655 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1119 21:57:55.777288  125655 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1119 21:57:55.777297  125655 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1119 21:57:55.777303  125655 command_runner.go:130] > #   state.
	I1119 21:57:55.777311  125655 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1119 21:57:55.777325  125655 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1119 21:57:55.777335  125655 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1119 21:57:55.777350  125655 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1119 21:57:55.777362  125655 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1119 21:57:55.777373  125655 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1119 21:57:55.777381  125655 command_runner.go:130] > #   The currently recognized values are:
	I1119 21:57:55.777388  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1119 21:57:55.777402  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1119 21:57:55.777414  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1119 21:57:55.777431  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1119 21:57:55.777446  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1119 21:57:55.777459  125655 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1119 21:57:55.777470  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1119 21:57:55.777478  125655 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1119 21:57:55.777484  125655 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1119 21:57:55.777499  125655 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1119 21:57:55.777510  125655 command_runner.go:130] > #   deprecated option "conmon".
	I1119 21:57:55.777521  125655 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1119 21:57:55.777532  125655 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1119 21:57:55.777543  125655 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1119 21:57:55.777553  125655 command_runner.go:130] > #   should be moved to the container's cgroup
	I1119 21:57:55.777567  125655 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1119 21:57:55.777578  125655 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1119 21:57:55.777586  125655 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1119 21:57:55.777593  125655 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1119 21:57:55.777598  125655 command_runner.go:130] > #
	I1119 21:57:55.777606  125655 command_runner.go:130] > # Using the seccomp notifier feature:
	I1119 21:57:55.777612  125655 command_runner.go:130] > #
	I1119 21:57:55.777628  125655 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1119 21:57:55.777638  125655 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1119 21:57:55.777646  125655 command_runner.go:130] > #
	I1119 21:57:55.777655  125655 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1119 21:57:55.777667  125655 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1119 21:57:55.777672  125655 command_runner.go:130] > #
	I1119 21:57:55.777682  125655 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1119 21:57:55.777688  125655 command_runner.go:130] > # feature.
	I1119 21:57:55.777693  125655 command_runner.go:130] > #
	I1119 21:57:55.777701  125655 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1119 21:57:55.777709  125655 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1119 21:57:55.777719  125655 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1119 21:57:55.777728  125655 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1119 21:57:55.777741  125655 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1119 21:57:55.777752  125655 command_runner.go:130] > #
	I1119 21:57:55.777764  125655 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1119 21:57:55.777774  125655 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1119 21:57:55.777781  125655 command_runner.go:130] > #
	I1119 21:57:55.777788  125655 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1119 21:57:55.777794  125655 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1119 21:57:55.777799  125655 command_runner.go:130] > #
	I1119 21:57:55.777804  125655 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1119 21:57:55.777810  125655 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1119 21:57:55.777814  125655 command_runner.go:130] > # limitation.
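As a sketch of what the comments above describe (illustrative only; it mirrors the runc handler that follows, with monitor settings omitted), a runtime handler allowed to process the notifier annotation could be declared like this:

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# A pod opting in would then set restartPolicy: Never and the annotation
	# "io.kubernetes.cri-o.seccompNotifierAction" = "stop" so that CRI-O stops
	# the workload once a blocked syscall is observed.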
	I1119 21:57:55.777820  125655 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1119 21:57:55.777824  125655 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1119 21:57:55.777829  125655 command_runner.go:130] > runtime_type = "oci"
	I1119 21:57:55.777835  125655 command_runner.go:130] > runtime_root = "/run/runc"
	I1119 21:57:55.777839  125655 command_runner.go:130] > runtime_config_path = ""
	I1119 21:57:55.777843  125655 command_runner.go:130] > monitor_path = "/usr/bin/conmon"
	I1119 21:57:55.777847  125655 command_runner.go:130] > monitor_cgroup = "pod"
	I1119 21:57:55.777853  125655 command_runner.go:130] > monitor_exec_cgroup = ""
	I1119 21:57:55.777857  125655 command_runner.go:130] > monitor_env = [
	I1119 21:57:55.777862  125655 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1119 21:57:55.777866  125655 command_runner.go:130] > ]
	I1119 21:57:55.777870  125655 command_runner.go:130] > privileged_without_host_devices = false
	I1119 21:57:55.777885  125655 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1119 21:57:55.777890  125655 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1119 21:57:55.777898  125655 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1119 21:57:55.777905  125655 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1119 21:57:55.777915  125655 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1119 21:57:55.777923  125655 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1119 21:57:55.777936  125655 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1119 21:57:55.777946  125655 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1119 21:57:55.777952  125655 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1119 21:57:55.777959  125655 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1119 21:57:55.777963  125655 command_runner.go:130] > # Example:
	I1119 21:57:55.777976  125655 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1119 21:57:55.777983  125655 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1119 21:57:55.777987  125655 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1119 21:57:55.777992  125655 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1119 21:57:55.777995  125655 command_runner.go:130] > # cpuset = 0
	I1119 21:57:55.777999  125655 command_runner.go:130] > # cpushares = "0-1"
	I1119 21:57:55.778002  125655 command_runner.go:130] > # Where:
	I1119 21:57:55.778006  125655 command_runner.go:130] > # The workload name is workload-type.
	I1119 21:57:55.778015  125655 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1119 21:57:55.778020  125655 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1119 21:57:55.778025  125655 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1119 21:57:55.778037  125655 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1119 21:57:55.778043  125655 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1119 21:57:55.778048  125655 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1119 21:57:55.778053  125655 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1119 21:57:55.778058  125655 command_runner.go:130] > # Default value is set to true
	I1119 21:57:55.778062  125655 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1119 21:57:55.778067  125655 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1119 21:57:55.778071  125655 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1119 21:57:55.778075  125655 command_runner.go:130] > # Default value is set to 'false'
	I1119 21:57:55.778079  125655 command_runner.go:130] > # disable_hostport_mapping = false
	I1119 21:57:55.778085  125655 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1119 21:57:55.778090  125655 command_runner.go:130] > #
	I1119 21:57:55.778095  125655 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1119 21:57:55.778101  125655 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1119 21:57:55.778106  125655 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1119 21:57:55.778115  125655 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1119 21:57:55.778120  125655 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1119 21:57:55.778125  125655 command_runner.go:130] > [crio.image]
	I1119 21:57:55.778131  125655 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1119 21:57:55.778135  125655 command_runner.go:130] > # default_transport = "docker://"
	I1119 21:57:55.778140  125655 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1119 21:57:55.778146  125655 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1119 21:57:55.778154  125655 command_runner.go:130] > # global_auth_file = ""
	I1119 21:57:55.778162  125655 command_runner.go:130] > # The image used to instantiate infra containers.
	I1119 21:57:55.778166  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.778171  125655 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10.1"
	I1119 21:57:55.778176  125655 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1119 21:57:55.778184  125655 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1119 21:57:55.778189  125655 command_runner.go:130] > # This option supports live configuration reload.
	I1119 21:57:55.778193  125655 command_runner.go:130] > # pause_image_auth_file = ""
	I1119 21:57:55.778201  125655 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1119 21:57:55.778210  125655 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1119 21:57:55.778216  125655 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1119 21:57:55.778221  125655 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1119 21:57:55.778226  125655 command_runner.go:130] > # pause_command = "/pause"
	I1119 21:57:55.778232  125655 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1119 21:57:55.778237  125655 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1119 21:57:55.778244  125655 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1119 21:57:55.778249  125655 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1119 21:57:55.778256  125655 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1119 21:57:55.778264  125655 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1119 21:57:55.778268  125655 command_runner.go:130] > # pinned_images = [
	I1119 21:57:55.778271  125655 command_runner.go:130] > # ]
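To make the three pattern kinds above concrete, a sketch of an uncommented list might look like the following; only the pause image comes from this run, the other entries are illustrative:

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
		"registry.k8s.io/kube-*",         # glob: wildcard at the end
		"*coredns*",                      # keyword: wildcards on both ends
	]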
	I1119 21:57:55.778277  125655 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1119 21:57:55.778283  125655 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1119 21:57:55.778288  125655 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1119 21:57:55.778296  125655 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1119 21:57:55.778301  125655 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1119 21:57:55.778305  125655 command_runner.go:130] > # signature_policy = ""
	I1119 21:57:55.778310  125655 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1119 21:57:55.778316  125655 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1119 21:57:55.778323  125655 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1119 21:57:55.778328  125655 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1119 21:57:55.778336  125655 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1119 21:57:55.778341  125655 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1119 21:57:55.778354  125655 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1119 21:57:55.778360  125655 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1119 21:57:55.778364  125655 command_runner.go:130] > # changing them here.
	I1119 21:57:55.778368  125655 command_runner.go:130] > # insecure_registries = [
	I1119 21:57:55.778371  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778377  125655 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1119 21:57:55.778382  125655 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1119 21:57:55.778386  125655 command_runner.go:130] > # image_volumes = "mkdir"
	I1119 21:57:55.778390  125655 command_runner.go:130] > # Temporary directory to use for storing big files
	I1119 21:57:55.778396  125655 command_runner.go:130] > # big_files_temporary_dir = ""
	I1119 21:57:55.778401  125655 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1119 21:57:55.778405  125655 command_runner.go:130] > # CNI plugins.
	I1119 21:57:55.778411  125655 command_runner.go:130] > [crio.network]
	I1119 21:57:55.778416  125655 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1119 21:57:55.778421  125655 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1119 21:57:55.778427  125655 command_runner.go:130] > # cni_default_network = ""
	I1119 21:57:55.778432  125655 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1119 21:57:55.778436  125655 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1119 21:57:55.778441  125655 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1119 21:57:55.778445  125655 command_runner.go:130] > # plugin_dirs = [
	I1119 21:57:55.778448  125655 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1119 21:57:55.778450  125655 command_runner.go:130] > # ]
	I1119 21:57:55.778461  125655 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1119 21:57:55.778467  125655 command_runner.go:130] > [crio.metrics]
	I1119 21:57:55.778471  125655 command_runner.go:130] > # Globally enable or disable metrics support.
	I1119 21:57:55.778475  125655 command_runner.go:130] > enable_metrics = true
	I1119 21:57:55.778479  125655 command_runner.go:130] > # Specify enabled metrics collectors.
	I1119 21:57:55.778485  125655 command_runner.go:130] > # Per default all metrics are enabled.
	I1119 21:57:55.778491  125655 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1119 21:57:55.778496  125655 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1119 21:57:55.778502  125655 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1119 21:57:55.778506  125655 command_runner.go:130] > # metrics_collectors = [
	I1119 21:57:55.778510  125655 command_runner.go:130] > # 	"operations",
	I1119 21:57:55.778519  125655 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1119 21:57:55.778526  125655 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1119 21:57:55.778529  125655 command_runner.go:130] > # 	"operations_errors",
	I1119 21:57:55.778533  125655 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1119 21:57:55.778537  125655 command_runner.go:130] > # 	"image_pulls_by_name",
	I1119 21:57:55.778541  125655 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1119 21:57:55.778545  125655 command_runner.go:130] > # 	"image_pulls_failures",
	I1119 21:57:55.778551  125655 command_runner.go:130] > # 	"image_pulls_successes",
	I1119 21:57:55.778556  125655 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1119 21:57:55.778559  125655 command_runner.go:130] > # 	"image_layer_reuse",
	I1119 21:57:55.778563  125655 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1119 21:57:55.778567  125655 command_runner.go:130] > # 	"containers_oom_total",
	I1119 21:57:55.778570  125655 command_runner.go:130] > # 	"containers_oom",
	I1119 21:57:55.778574  125655 command_runner.go:130] > # 	"processes_defunct",
	I1119 21:57:55.778578  125655 command_runner.go:130] > # 	"operations_total",
	I1119 21:57:55.778582  125655 command_runner.go:130] > # 	"operations_latency_seconds",
	I1119 21:57:55.778588  125655 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1119 21:57:55.778592  125655 command_runner.go:130] > # 	"operations_errors_total",
	I1119 21:57:55.778596  125655 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1119 21:57:55.778600  125655 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1119 21:57:55.778604  125655 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1119 21:57:55.778611  125655 command_runner.go:130] > # 	"image_pulls_success_total",
	I1119 21:57:55.778614  125655 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1119 21:57:55.778618  125655 command_runner.go:130] > # 	"containers_oom_count_total",
	I1119 21:57:55.778625  125655 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1119 21:57:55.778629  125655 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1119 21:57:55.778632  125655 command_runner.go:130] > # ]
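A sketch of a narrowed-down collector selection (names taken from the default list above) could look like this; as noted, "operations" is treated the same as "crio_operations" and "container_runtime_crio_operations":

	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_count_total",
	]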
	I1119 21:57:55.778637  125655 command_runner.go:130] > # The port on which the metrics server will listen.
	I1119 21:57:55.778641  125655 command_runner.go:130] > # metrics_port = 9090
	I1119 21:57:55.778645  125655 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1119 21:57:55.778656  125655 command_runner.go:130] > # metrics_socket = ""
	I1119 21:57:55.778660  125655 command_runner.go:130] > # The certificate for the secure metrics server.
	I1119 21:57:55.778665  125655 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1119 21:57:55.778678  125655 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1119 21:57:55.778683  125655 command_runner.go:130] > # certificate on any modification event.
	I1119 21:57:55.778689  125655 command_runner.go:130] > # metrics_cert = ""
	I1119 21:57:55.778694  125655 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1119 21:57:55.778699  125655 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1119 21:57:55.778703  125655 command_runner.go:130] > # metrics_key = ""
	I1119 21:57:55.778708  125655 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1119 21:57:55.778714  125655 command_runner.go:130] > [crio.tracing]
	I1119 21:57:55.778719  125655 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1119 21:57:55.778722  125655 command_runner.go:130] > # enable_tracing = false
	I1119 21:57:55.778729  125655 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1119 21:57:55.778733  125655 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1119 21:57:55.778739  125655 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1119 21:57:55.778746  125655 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1119 21:57:55.778750  125655 command_runner.go:130] > # CRI-O NRI configuration.
	I1119 21:57:55.778753  125655 command_runner.go:130] > [crio.nri]
	I1119 21:57:55.778757  125655 command_runner.go:130] > # Globally enable or disable NRI.
	I1119 21:57:55.778761  125655 command_runner.go:130] > # enable_nri = false
	I1119 21:57:55.778766  125655 command_runner.go:130] > # NRI socket to listen on.
	I1119 21:57:55.778772  125655 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1119 21:57:55.778776  125655 command_runner.go:130] > # NRI plugin directory to use.
	I1119 21:57:55.778783  125655 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1119 21:57:55.778787  125655 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1119 21:57:55.778791  125655 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1119 21:57:55.778796  125655 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1119 21:57:55.778801  125655 command_runner.go:130] > # nri_disable_connections = false
	I1119 21:57:55.778805  125655 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1119 21:57:55.778809  125655 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1119 21:57:55.778814  125655 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1119 21:57:55.778818  125655 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1119 21:57:55.778823  125655 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1119 21:57:55.778826  125655 command_runner.go:130] > [crio.stats]
	I1119 21:57:55.778831  125655 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1119 21:57:55.778844  125655 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1119 21:57:55.778848  125655 command_runner.go:130] > # stats_collection_period = 0
	I1119 21:57:55.778894  125655 command_runner.go:130] ! time="2025-11-19 21:57:55.755704188Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1119 21:57:55.778909  125655 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1119 21:57:55.779016  125655 cni.go:84] Creating CNI manager for ""
	I1119 21:57:55.779032  125655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:57:55.779052  125655 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:57:55.779081  125655 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.56 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-274272 NodeName:functional-274272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:57:55.779230  125655 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.56
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-274272"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.56"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.56"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 21:57:55.779314  125655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:57:55.791865  125655 command_runner.go:130] > kubeadm
	I1119 21:57:55.791898  125655 command_runner.go:130] > kubectl
	I1119 21:57:55.791902  125655 command_runner.go:130] > kubelet
	I1119 21:57:55.792339  125655 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:57:55.792402  125655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:57:55.804500  125655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1119 21:57:55.831211  125655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:57:55.857336  125655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1119 21:57:55.883585  125655 ssh_runner.go:195] Run: grep 192.168.39.56	control-plane.minikube.internal$ /etc/hosts
	I1119 21:57:55.888240  125655 command_runner.go:130] > 192.168.39.56	control-plane.minikube.internal
	I1119 21:57:55.888422  125655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:57:56.081809  125655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:57:56.102791  125655 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272 for IP: 192.168.39.56
	I1119 21:57:56.102821  125655 certs.go:195] generating shared ca certs ...
	I1119 21:57:56.102844  125655 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:57:56.103063  125655 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 21:57:56.103136  125655 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 21:57:56.103152  125655 certs.go:257] generating profile certs ...
	I1119 21:57:56.103293  125655 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/client.key
	I1119 21:57:56.103368  125655 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key.ff709108
	I1119 21:57:56.103443  125655 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key
	I1119 21:57:56.103459  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 21:57:56.103484  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 21:57:56.103511  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 21:57:56.103529  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 21:57:56.103543  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 21:57:56.103561  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 21:57:56.103579  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 21:57:56.103596  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 21:57:56.103672  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 21:57:56.103719  125655 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 21:57:56.103738  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 21:57:56.103773  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 21:57:56.103801  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:57:56.103827  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 21:57:56.103904  125655 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 21:57:56.103946  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.103967  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.103983  125655 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.104844  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:57:56.137315  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 21:57:56.170238  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:57:56.201511  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 21:57:56.232500  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 21:57:56.263196  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:57:56.293733  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:57:56.325433  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/functional-274272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 21:57:56.358184  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 21:57:56.390372  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:57:56.421898  125655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 21:57:56.453376  125655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:57:56.475453  125655 ssh_runner.go:195] Run: openssl version
	I1119 21:57:56.482740  125655 command_runner.go:130] > OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
	I1119 21:57:56.482959  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:57:56.496693  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502262  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502418  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.502483  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:57:56.510342  125655 command_runner.go:130] > b5213941
	I1119 21:57:56.510469  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 21:57:56.522344  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 21:57:56.536631  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542029  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542209  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.542274  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 21:57:56.550380  125655 command_runner.go:130] > 51391683
	I1119 21:57:56.550501  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 21:57:56.561821  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 21:57:56.575290  125655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.580784  125655 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.581086  125655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.581144  125655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 21:57:56.588956  125655 command_runner.go:130] > 3ec20f2e
	I1119 21:57:56.589037  125655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 21:57:56.601212  125655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:57:56.606946  125655 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:57:56.606978  125655 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1119 21:57:56.606987  125655 command_runner.go:130] > Device: 253,1	Inode: 9430692     Links: 1
	I1119 21:57:56.606996  125655 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1119 21:57:56.607006  125655 command_runner.go:130] > Access: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607014  125655 command_runner.go:130] > Modify: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607022  125655 command_runner.go:130] > Change: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607031  125655 command_runner.go:130] >  Birth: 2025-11-19 21:55:24.817391754 +0000
	I1119 21:57:56.607101  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 21:57:56.614717  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.614807  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 21:57:56.622151  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.622364  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 21:57:56.629649  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.630010  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 21:57:56.637598  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.637675  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 21:57:56.645478  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.645584  125655 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 21:57:56.652788  125655 command_runner.go:130] > Certificate will not expire
	I1119 21:57:56.653060  125655 kubeadm.go:401] StartCluster: {Name:functional-274272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-274272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:57:56.653151  125655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:57:56.653212  125655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:57:56.693386  125655 command_runner.go:130] > 0106e4f2ce61898cba6e2b7a948933217c923a83f47dc98f8e443135d7c1953c
	I1119 21:57:56.693418  125655 command_runner.go:130] > 27fcf5ffa9c5cce8c4adcef8f00caed64a4fab40166ca861d697c36836f01dc9
	I1119 21:57:56.693423  125655 command_runner.go:130] > 33d592c056efc2e2c713428bd1a12974c07ffee6ed0d926d5ef6cd0cca4db55d
	I1119 21:57:56.693431  125655 command_runner.go:130] > 899acc3a073d3b1e8a64e329e67a0c9a8014c3e5fb818300620298c210fd33f1
	I1119 21:57:56.693436  125655 command_runner.go:130] > cb88ba1c8d8cc6830004837e32b6220121207668f01c3560a51e2a8ce1e36ed3
	I1119 21:57:56.693441  125655 command_runner.go:130] > 077ec5fba8a3dcc009b2738e8bb85762db7ca0d2d2f4153471471dce4bb69d58
	I1119 21:57:56.693445  125655 command_runner.go:130] > f188ee072392f6539a4fc0dbc95ec1b18f76377b44c1ca821fc51f07d6c4ec6b
	I1119 21:57:56.693453  125655 command_runner.go:130] > 94bc4b1fc9ffc618caea337c906ff95370af6d466ae4d510d6d81512364b13b1
	I1119 21:57:56.693458  125655 command_runner.go:130] > 8a2841a674b031e27e3be5469070765869d325fd68f393a4e80843dbe974314f
	I1119 21:57:56.693463  125655 command_runner.go:130] > 37c3495f685326ea33fc62203369d3d57a6b53431dd8b780278885837239fdfc
	I1119 21:57:56.693468  125655 command_runner.go:130] > 931760b60afe373933492aa207a6c5231c4e48d65a493c258f05f5ff220173d5
	I1119 21:57:56.693473  125655 command_runner.go:130] > d2921c6f5f07b3d3a3ca4ccf47283bb6bb22d080b6da461bd67a7b1660db1191
	I1119 21:57:56.695025  125655 cri.go:89] found id: "0106e4f2ce61898cba6e2b7a948933217c923a83f47dc98f8e443135d7c1953c"
	I1119 21:57:56.695043  125655 cri.go:89] found id: "27fcf5ffa9c5cce8c4adcef8f00caed64a4fab40166ca861d697c36836f01dc9"
	I1119 21:57:56.695048  125655 cri.go:89] found id: "33d592c056efc2e2c713428bd1a12974c07ffee6ed0d926d5ef6cd0cca4db55d"
	I1119 21:57:56.695053  125655 cri.go:89] found id: "899acc3a073d3b1e8a64e329e67a0c9a8014c3e5fb818300620298c210fd33f1"
	I1119 21:57:56.695056  125655 cri.go:89] found id: "cb88ba1c8d8cc6830004837e32b6220121207668f01c3560a51e2a8ce1e36ed3"
	I1119 21:57:56.695061  125655 cri.go:89] found id: "077ec5fba8a3dcc009b2738e8bb85762db7ca0d2d2f4153471471dce4bb69d58"
	I1119 21:57:56.695067  125655 cri.go:89] found id: "f188ee072392f6539a4fc0dbc95ec1b18f76377b44c1ca821fc51f07d6c4ec6b"
	I1119 21:57:56.695072  125655 cri.go:89] found id: "94bc4b1fc9ffc618caea337c906ff95370af6d466ae4d510d6d81512364b13b1"
	I1119 21:57:56.695077  125655 cri.go:89] found id: "8a2841a674b031e27e3be5469070765869d325fd68f393a4e80843dbe974314f"
	I1119 21:57:56.695088  125655 cri.go:89] found id: "37c3495f685326ea33fc62203369d3d57a6b53431dd8b780278885837239fdfc"
	I1119 21:57:56.695093  125655 cri.go:89] found id: "931760b60afe373933492aa207a6c5231c4e48d65a493c258f05f5ff220173d5"
	I1119 21:57:56.695097  125655 cri.go:89] found id: "d2921c6f5f07b3d3a3ca4ccf47283bb6bb22d080b6da461bd67a7b1660db1191"
	I1119 21:57:56.695101  125655 cri.go:89] found id: ""
	I1119 21:57:56.695155  125655 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-274272 -n functional-274272
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-274272 -n functional-274272: exit status 2 (204.534087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-274272" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (742.70s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:3.1: (1.076698088s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:3.3: signal: killed (559.62667ms)

                                                
                                                
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"

                                                
                                                
** /stderr **
functional_test.go:1066: failed to 'cache add' remote image "registry.k8s.io/pause:3.3". args "out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:3.3" err signal: killed
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:latest: context deadline exceeded (2.078µs)
functional_test.go:1066: failed to 'cache add' remote image "registry.k8s.io/pause:latest". args "out/minikube-linux-amd64 -p functional-274272 cache add registry.k8s.io/pause:latest" err context deadline exceeded
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_remote (1.64s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
functional_test.go:1117: (dbg) Non-zero exit: out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3: context deadline exceeded (1.145µs)
functional_test.go:1119: failed to delete image registry.k8s.io/pause:3.3 from cache. args "out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3": context deadline exceeded
--- FAIL: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
functional_test.go:1125: (dbg) Non-zero exit: out/minikube-linux-amd64 cache list: context deadline exceeded (460ns)
functional_test.go:1127: failed to do cache list. args "out/minikube-linux-amd64 cache list": context deadline exceeded
functional_test.go:1130: expected 'cache list' output to include 'registry.k8s.io/pause' but got: ******
--- FAIL: TestFunctional/serial/CacheCmd/cache/list (0.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl images
functional_test.go:1139: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl images: context deadline exceeded (434ns)
functional_test.go:1141: failed to get images by "out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl images" ssh context deadline exceeded
functional_test.go:1145: expected sha for pause:3.3 "0184c1613d929" to be in the output but got **
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1162: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl rmi registry.k8s.io/pause:latest: context deadline exceeded (514ns)
functional_test.go:1165: failed to manually delete image "out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl rmi registry.k8s.io/pause:latest" : context deadline exceeded
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl inspecti registry.k8s.io/pause:latest: context deadline exceeded (369ns)
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 cache reload
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-274272 cache reload: context deadline exceeded (153ns)
functional_test.go:1175: expected "out/minikube-linux-amd64 -p functional-274272 cache reload" to run successfully but got error: context deadline exceeded
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1178: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl inspecti registry.k8s.io/pause:latest: context deadline exceeded (145ns)
functional_test.go:1180: expected "out/minikube-linux-amd64 -p functional-274272 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: context deadline exceeded
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Non-zero exit: out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1: context deadline exceeded (186ns)
functional_test.go:1189: failed to delete registry.k8s.io/pause:3.1 from cache. args "out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1": context deadline exceeded
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
functional_test.go:1187: (dbg) Non-zero exit: out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest: context deadline exceeded (146ns)
functional_test.go:1189: failed to delete registry.k8s.io/pause:latest from cache. args "out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest": context deadline exceeded
--- FAIL: TestFunctional/serial/CacheCmd/cache/delete (0.00s)

                                                
                                    
x
+
TestFunctional/parallel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel
functional_test.go:184: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (663.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 stop --alsologtostderr -v 5: (4m17.089758262s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 start --wait true --alsologtostderr -v 5
E1119 22:59:25.094544  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:00:48.182816  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:04:25.095708  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-487903 start --wait true --alsologtostderr -v 5: exit status 80 (6m44.734267831s)

                                                
                                                
-- stdout --
	* [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	* Found network options:
	  - NO_PROXY=192.168.39.15
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.15
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:58:56.213053  140883 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:56.213329  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213337  140883 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:56.213342  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213519  140883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 22:58:56.213975  140883 out.go:368] Setting JSON to false
	I1119 22:58:56.214867  140883 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16883,"bootTime":1763576253,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:58:56.215026  140883 start.go:143] virtualization: kvm guest
	I1119 22:58:56.217423  140883 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:58:56.219002  140883 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:56.219026  140883 notify.go:221] Checking for updates...
	I1119 22:58:56.221890  140883 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:56.223132  140883 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:58:56.224328  140883 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 22:58:56.225456  140883 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:58:56.226526  140883 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:56.228080  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:56.228220  140883 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:56.264170  140883 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 22:58:56.265437  140883 start.go:309] selected driver: kvm2
	I1119 22:58:56.265462  140883 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.265642  140883 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:56.266633  140883 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:56.266714  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:58:56.266798  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:58:56.266898  140883 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.267071  140883 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:56.269538  140883 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 22:58:56.270926  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:56.270958  140883 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:58:56.270984  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:56.271073  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:58:56.271085  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:56.271229  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:58:56.271448  140883 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:58:56.271493  140883 start.go:364] duration metric: took 26.421µs to acquireMachinesLock for "ha-487903"
	I1119 22:58:56.271509  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:58:56.271522  140883 fix.go:54] fixHost starting: 
	I1119 22:58:56.273404  140883 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 22:58:56.273427  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:58:56.275031  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 22:58:56.275084  140883 main.go:143] libmachine: starting domain...
	I1119 22:58:56.275096  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:58:56.275845  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:58:56.276258  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:58:56.276731  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:58:56.277856  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 22:58:57.532843  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:58:57.534321  140883 main.go:143] libmachine: domain is now running
	I1119 22:58:57.534360  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:58:57.535171  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535745  140883 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535758  140883 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 22:58:57.535763  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:58:57.536231  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.536255  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 22:58:57.536263  140883 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 22:58:57.536269  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:58:57.536284  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:58:57.538607  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.538989  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.539013  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.539174  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:57.539442  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:58:57.539453  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:00.588204  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:06.668207  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:09.789580  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:09.792830  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793316  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.793339  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793640  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:09.793859  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:09.796160  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796551  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.796574  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796736  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.796945  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.796957  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:09.920535  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:09.920579  140883 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 22:59:09.924026  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924613  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.924652  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924920  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.925162  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.925179  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 22:59:10.075390  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 22:59:10.078652  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079199  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.079233  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079435  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.079647  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.079675  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:10.221997  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:10.222032  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:10.222082  140883 buildroot.go:174] setting up certificates
	I1119 22:59:10.222102  140883 provision.go:84] configureAuth start
	I1119 22:59:10.225146  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.225685  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.225711  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228217  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228605  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.228627  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228759  140883 provision.go:143] copyHostCerts
	I1119 22:59:10.228794  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228835  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:10.228849  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228933  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:10.229026  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229051  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:10.229057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229096  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:10.229160  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229185  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:10.229189  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229230  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:10.229308  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
	I1119 22:59:10.335910  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:10.335996  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:10.338770  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339269  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.339307  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339538  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.439975  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:10.440060  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1119 22:59:10.477861  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:10.477964  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:59:10.529406  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:10.529472  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:10.570048  140883 provision.go:87] duration metric: took 347.930624ms to configureAuth
	I1119 22:59:10.570076  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:10.570440  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:10.573510  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.573997  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.574034  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.574235  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.574507  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.574526  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:10.838912  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:10.838950  140883 machine.go:97] duration metric: took 1.045075254s to provisionDockerMachine
	I1119 22:59:10.838968  140883 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 22:59:10.838983  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:10.839099  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:10.842141  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842656  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.842700  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842857  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.941042  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:10.946128  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:10.946154  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:10.946218  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:10.946302  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:10.946321  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:10.946415  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:10.958665  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:10.989852  140883 start.go:296] duration metric: took 150.865435ms for postStartSetup
	I1119 22:59:10.989981  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:10.992672  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993117  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.993143  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993318  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.080904  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:11.080983  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:11.124462  140883 fix.go:56] duration metric: took 14.852929829s for fixHost
	I1119 22:59:11.127772  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128299  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.128336  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128547  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:11.128846  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:11.128865  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:11.255539  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593151.225105539
	
	I1119 22:59:11.255568  140883 fix.go:216] guest clock: 1763593151.225105539
	I1119 22:59:11.255578  140883 fix.go:229] Guest: 2025-11-19 22:59:11.225105539 +0000 UTC Remote: 2025-11-19 22:59:11.124499316 +0000 UTC m=+14.964187528 (delta=100.606223ms)
	I1119 22:59:11.255598  140883 fix.go:200] guest clock delta is within tolerance: 100.606223ms
	I1119 22:59:11.255604  140883 start.go:83] releasing machines lock for "ha-487903", held for 14.984100369s
	I1119 22:59:11.258588  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259028  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.259061  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259648  140883 ssh_runner.go:195] Run: cat /version.json
	I1119 22:59:11.259725  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:11.262795  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263203  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263243  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263270  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263465  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.263776  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263809  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.264018  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.345959  140883 ssh_runner.go:195] Run: systemctl --version
	I1119 22:59:11.373994  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:11.522527  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:11.531055  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:11.531143  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:11.555635  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:11.555666  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:11.555762  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:11.592696  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:11.617501  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:11.617572  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:11.636732  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:11.654496  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:11.811000  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:12.032082  140883 docker.go:234] disabling docker service ...
	I1119 22:59:12.032160  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:12.048543  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:12.064141  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:12.225964  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:12.368239  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:12.384716  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:12.408056  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:12.408120  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.421146  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:12.421223  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.434510  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.447609  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.460732  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:12.477217  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.489987  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.511524  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.524517  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:12.536463  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:12.536536  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:12.563021  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:12.578130  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:12.729736  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:59:12.855038  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:12.855107  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:12.860898  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:12.860954  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:12.865294  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:12.912486  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 22:59:12.912590  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.943910  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.976663  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:12.980411  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.980805  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:12.980827  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.981017  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:12.986162  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:13.003058  140883 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:59:13.003281  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:13.003338  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:13.047712  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:59:13.047780  140883 ssh_runner.go:195] Run: which lz4
	I1119 22:59:13.052977  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1119 22:59:13.053081  140883 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 22:59:13.058671  140883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1119 22:59:13.058708  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1119 22:59:14.697348  140883 crio.go:462] duration metric: took 1.644299269s to copy over tarball
	I1119 22:59:14.697449  140883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 22:59:16.447188  140883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.749702497s)
	I1119 22:59:16.447222  140883 crio.go:469] duration metric: took 1.749848336s to extract the tarball
	I1119 22:59:16.447231  140883 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1119 22:59:16.489289  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:16.536108  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:16.536132  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:16.536140  140883 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 22:59:16.536265  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:16.536328  140883 ssh_runner.go:195] Run: crio config
	I1119 22:59:16.585135  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:59:16.585158  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:59:16.585181  140883 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:59:16.585202  140883 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:59:16.585355  140883 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:59:16.585375  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:16.585419  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:16.615712  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:16.615824  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
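	The static pod above runs kube-vip with cp_enable and lb_enable, so the VIP 192.168.39.254 answers on port 8443 for whichever control-plane node currently holds the plndr-cp-lock lease. A small Go sketch, assuming the VIP is already advertised, that probes that endpoint the way an external readiness check might; the address and port come from the manifest, the probe itself is illustrative and not minikube code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := net.JoinHostPort("192.168.39.254", "8443")
		conn, err := tls.DialWithDialer(
			&net.Dialer{Timeout: 5 * time.Second},
			"tcp", addr,
			// The apiserver cert is signed by minikubeCA, which this host may not trust;
			// skip verification because only TCP/TLS reachability matters here.
			&tls.Config{InsecureSkipVerify: true},
		)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("kube-vip VIP answers on", addr)
	}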
	I1119 22:59:16.615913  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:16.633015  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:16.633116  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 22:59:16.646138  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 22:59:16.668865  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:16.691000  140883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 22:59:16.713854  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:16.736483  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:16.741324  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:16.757055  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:16.900472  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:16.922953  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 22:59:16.922982  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:16.922999  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:16.923147  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:16.923233  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:16.923245  140883 certs.go:257] generating profile certs ...
	I1119 22:59:16.923340  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:16.923369  140883 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 22:59:16.923388  140883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.15 192.168.39.191 192.168.39.160 192.168.39.254]
	I1119 22:59:17.222295  140883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 ...
	I1119 22:59:17.222330  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30: {Name:mk1efb8fb5e10ff1c6bc1bceec2ebc4b1a4cdce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222507  140883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 ...
	I1119 22:59:17.222521  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30: {Name:mk99b1381f2cff273ee01fc482a9705b00bd6fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222598  140883 certs.go:382] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt
	I1119 22:59:17.224167  140883 certs.go:386] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key
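	The apiserver certificate generated above is issued against the shared minikubeCA with every control-plane IP plus the VIP 192.168.39.254 in its SAN list. A self-contained Go sketch of that pattern using crypto/x509; the in-memory CA and the output file name are hypothetical stand-ins, minikube's real helpers are not shown here:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in for .minikube/ca.{key,crt}; errors elided for brevity in this sketch.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the IP SANs listed in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.15"), net.ParseIP("192.168.39.191"),
				net.ParseIP("192.168.39.160"), net.ParseIP("192.168.39.254"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

		out, _ := os.Create("apiserver.crt")
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}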
	I1119 22:59:17.227659  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:17.227687  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:17.227700  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:17.227711  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:17.227725  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:17.227746  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:17.227763  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:17.227778  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:17.227791  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:17.227853  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:17.227922  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:17.227938  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:17.227968  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:17.228003  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:17.228035  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:17.228085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:17.228122  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.228146  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.228164  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.228751  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:17.267457  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:17.301057  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:17.334334  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:17.369081  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:17.401525  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:17.435168  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:17.468258  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:17.501844  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:17.535729  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:17.568773  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:17.602167  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:59:17.625050  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:17.631971  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:17.646313  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652083  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652141  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.660153  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:17.675854  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:17.691421  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697623  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697704  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.706162  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:17.721477  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:17.736953  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743111  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743185  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.751321  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:59:17.766690  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:17.773200  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:17.781700  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:17.790000  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:17.798411  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:17.807029  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:17.815374  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
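	Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours, which is what decides whether it gets regenerated. The same test expressed in Go (a sketch, not minikube's implementation; the path is one of the files checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println("certificate valid past 24h:", cert.NotAfter)
		}
	}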
	I1119 22:59:17.823385  140883 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:59:17.823559  140883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:59:17.823640  140883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:59:17.865475  140883 cri.go:89] found id: ""
	I1119 22:59:17.865542  140883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:59:17.879260  140883 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:59:17.879283  140883 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:59:17.879329  140883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:59:17.892209  140883 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:59:17.892673  140883 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.892815  140883 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 22:59:17.893101  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.963155  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:59:17.963616  140883 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:59:17.963633  140883 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:59:17.963638  140883 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:59:17.963643  140883 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:59:17.963647  140883 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:59:17.963661  140883 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 22:59:17.964167  140883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:59:17.978087  140883 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 22:59:17.978113  140883 kubeadm.go:602] duration metric: took 98.825282ms to restartPrimaryControlPlane
	I1119 22:59:17.978123  140883 kubeadm.go:403] duration metric: took 154.749827ms to StartCluster
	I1119 22:59:17.978140  140883 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.978206  140883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.978813  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:18.091922  140883 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:18.091962  140883 start.go:242] waiting for startup goroutines ...
	I1119 22:59:18.091978  140883 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:59:18.092251  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.092350  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:18.092431  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 22:59:18.092446  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 107.182µs
	I1119 22:59:18.092458  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:18.092470  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:18.092656  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.094431  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:18.096799  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097214  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:18.097237  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097408  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:18.112697  140883 out.go:179] * Enabled addons: 
	I1119 22:59:18.214740  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/pause:3.1". assuming images are not preloaded.
	I1119 22:59:18.214767  140883 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/pause:3.1]
	I1119 22:59:18.214826  140883 image.go:138] retrieving image: registry.k8s.io/pause:3.1
	I1119 22:59:18.216192  140883 image.go:181] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1119 22:59:18.232424  140883 addons.go:515] duration metric: took 140.445828ms for enable addons: enabled=[]
	I1119 22:59:18.373430  140883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1119 22:59:18.424783  140883 cache_images.go:118] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1119 22:59:18.424834  140883 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1119 22:59:18.424902  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:18.429834  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.468042  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.506995  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.543506  140883 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1119 22:59:18.543547  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 -> /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.543609  140883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549496  140883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1119 22:59:18.549518  140883 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549580  140883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1119 22:59:19.599140  140883 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.049525714s)
	I1119 22:59:19.599189  140883 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1119 22:59:19.599233  140883 cache_images.go:125] Successfully loaded all cached images
	I1119 22:59:19.599247  140883 cache_images.go:94] duration metric: took 1.384470883s to LoadCachedImages
	I1119 22:59:19.604407  140883 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 22:59:19.604463  140883 start.go:247] waiting for cluster config update ...
	I1119 22:59:19.604477  140883 start.go:256] writing updated cluster config ...
	I1119 22:59:19.606572  140883 out.go:203] 
	I1119 22:59:19.608121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:19.608254  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.609863  140883 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 22:59:19.611047  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:19.611074  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:59:19.611204  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:59:19.611221  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:59:19.611355  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.611616  140883 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:59:19.611685  140883 start.go:364] duration metric: took 40.215µs to acquireMachinesLock for "ha-487903-m02"
	I1119 22:59:19.611709  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:59:19.611717  140883 fix.go:54] fixHost starting: m02
	I1119 22:59:19.613410  140883 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 22:59:19.613431  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:59:19.615098  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 22:59:19.615142  140883 main.go:143] libmachine: starting domain...
	I1119 22:59:19.615165  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:59:19.616007  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:59:19.616405  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:59:19.616894  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:59:19.618210  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
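	The domain XML above gives ha-487903-m02 two virtio NICs, one on the private mk-ha-487903 network and one on libvirt's default network; the MAC on the private network (52:54:00:04:d5:70) is what the DHCP-lease lookups further down match against. A short Go sketch, standard library only, that pulls that MAC out of a saved copy of such a definition (the file name is hypothetical):

	package main

	import (
		"encoding/xml"
		"fmt"
		"os"
	)

	type domain struct {
		Interfaces []struct {
			MAC struct {
				Address string `xml:"address,attr"`
			} `xml:"mac"`
			Source struct {
				Network string `xml:"network,attr"`
			} `xml:"source"`
		} `xml:"devices>interface"`
	}

	func main() {
		data, err := os.ReadFile("ha-487903-m02.xml") // hypothetical dump of the XML above
		if err != nil {
			panic(err)
		}
		var d domain
		if err := xml.Unmarshal(data, &d); err != nil {
			panic(err)
		}
		for _, iface := range d.Interfaces {
			if iface.Source.Network == "mk-ha-487903" {
				fmt.Println("private-network MAC:", iface.MAC.Address)
			}
		}
	}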
	
	I1119 22:59:20.934173  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:59:20.935608  140883 main.go:143] libmachine: domain is now running
	I1119 22:59:20.935632  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:59:20.936409  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936918  140883 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936934  140883 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 22:59:20.936940  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:59:20.937407  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.937433  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 22:59:20.937445  140883 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 22:59:20.937450  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:59:20.937455  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:59:20.939837  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940340  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.940366  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940532  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:20.940720  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:20.940730  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:24.012154  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:30.092142  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:33.096094  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 22:59:36.206633  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
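	The failed dials above ("no route to host", then "connection refused") are the expected shape of waiting for a rebooted VM: the IP comes up before sshd does, so the port is polled until it accepts a connection. A minimal retry loop in Go with the same behaviour (timeouts and the overall deadline are illustrative, not minikube's values):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls a TCP address until it accepts a connection or the deadline passes.
	func waitForSSH(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Println("still waiting:", err) // e.g. "no route to host", "connection refused"
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh on %s not reachable after %s", addr, deadline)
	}

	func main() {
		if err := waitForSSH("192.168.39.191:22", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("sshd is up")
	}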
	I1119 22:59:36.209899  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210391  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.210411  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210732  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:36.211013  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:36.213527  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.213977  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.214002  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.214194  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.214405  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.214418  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:36.326740  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:36.326781  140883 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 22:59:36.329446  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.329910  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.329941  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.330096  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.330305  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.330321  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 22:59:36.457310  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 22:59:36.460161  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460619  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.460649  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460898  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.461143  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.461164  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:36.581648  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:36.581677  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:36.581693  140883 buildroot.go:174] setting up certificates
	I1119 22:59:36.581705  140883 provision.go:84] configureAuth start
	I1119 22:59:36.585049  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.585711  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.585755  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588067  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588494  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.588521  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588646  140883 provision.go:143] copyHostCerts
	I1119 22:59:36.588674  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588706  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:36.588714  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588769  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:36.588842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588860  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:36.588866  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588903  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:36.589025  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589050  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:36.589057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589079  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:36.589147  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
	I1119 22:59:36.826031  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:36.826092  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:36.828610  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829058  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.829082  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829236  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:36.914853  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:36.914951  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:36.947443  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:36.947526  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 22:59:36.979006  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:36.979097  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:59:37.010933  140883 provision.go:87] duration metric: took 429.212672ms to configureAuth
	I1119 22:59:37.010966  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:37.011249  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:37.014321  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.014846  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.014890  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.015134  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.015408  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.015434  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:37.258599  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:37.258634  140883 machine.go:97] duration metric: took 1.047602081s to provisionDockerMachine
	I1119 22:59:37.258649  140883 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 22:59:37.258662  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:37.258718  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:37.261730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262218  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.262247  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262427  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.348416  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:37.353564  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:37.353602  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:37.353676  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:37.353750  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:37.353760  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:37.353845  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:37.366805  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:37.398923  140883 start.go:296] duration metric: took 140.253592ms for postStartSetup
	I1119 22:59:37.399023  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:37.401945  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402392  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.402417  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402579  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.493849  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:37.493957  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:37.556929  140883 fix.go:56] duration metric: took 17.945204618s for fixHost
	I1119 22:59:37.560155  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560693  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.560730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560998  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.561206  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.561217  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:37.681336  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593177.645225217
	
	I1119 22:59:37.681361  140883 fix.go:216] guest clock: 1763593177.645225217
	I1119 22:59:37.681369  140883 fix.go:229] Guest: 2025-11-19 22:59:37.645225217 +0000 UTC Remote: 2025-11-19 22:59:37.55695737 +0000 UTC m=+41.396645577 (delta=88.267847ms)
	I1119 22:59:37.681385  140883 fix.go:200] guest clock delta is within tolerance: 88.267847ms
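	The guest-clock check above runs `date +%s.%N` inside the VM and compares it with the host's wall clock; the 88ms delta is inside tolerance, so no time resync is needed. The comparison reduces to parsing the seconds.nanoseconds output and taking the absolute difference, sketched here in Go (the hard-coded sample value is the one from the log; a real check would read it over SSH):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestTime converts `date +%s.%N` output (e.g. "1763593177.645225217") to a time.Time.
	// Assumes the fractional part is the full 9-digit nanosecond field that %N produces.
	func guestTime(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := guestTime("1763593177.645225217")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Println("guest/host clock delta:", delta)
	}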
	I1119 22:59:37.681391  140883 start.go:83] releasing machines lock for "ha-487903-m02", held for 18.069691628s
	I1119 22:59:37.684191  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.684592  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.684617  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.686578  140883 out.go:179] * Found network options:
	I1119 22:59:37.687756  140883 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 22:59:37.688974  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 22:59:37.689312  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 22:59:37.689391  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:37.689429  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:37.692557  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.692674  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693088  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693119  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693175  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693199  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693322  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.693519  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.917934  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:37.925533  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:37.925625  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:37.946707  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:37.946736  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:37.946815  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:37.972689  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:37.990963  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:37.991033  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:38.008725  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:38.025289  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:38.180260  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:38.396498  140883 docker.go:234] disabling docker service ...
	I1119 22:59:38.396561  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:38.413974  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:38.429993  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:38.600828  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:38.743771  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:38.761006  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:38.784784  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:38.784849  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.797617  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:38.797682  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.810823  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.824064  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.837310  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:38.851169  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.864106  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.884838  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.897998  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:38.909976  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:38.910055  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:38.933644  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:38.947853  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:39.102667  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:59:39.237328  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:39.237425  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:39.244051  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:39.244122  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:39.248522  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:39.290126  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 22:59:39.290249  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.321443  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.354869  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:39.356230  140883 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 22:59:39.359840  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360302  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:39.360329  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360492  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:39.365473  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:39.382234  140883 mustload.go:66] Loading cluster: ha-487903
	I1119 22:59:39.382499  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:39.384228  140883 host.go:66] Checking if "ha-487903" exists ...
	I1119 22:59:39.384424  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 22:59:39.384434  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:39.384461  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:39.384590  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:39.384635  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:39.384645  140883 certs.go:257] generating profile certs ...
	I1119 22:59:39.384719  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:39.384773  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 22:59:39.384805  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:39.384819  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:39.384832  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:39.384842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:39.384852  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:39.384862  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:39.384884  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:39.384898  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:39.384910  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:39.384960  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:39.384991  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:39.385000  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:39.385020  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:39.385051  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:39.385085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:39.385135  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:39.385162  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 22:59:39.385175  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:39.385187  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:39.387504  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.387909  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:39.387931  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.388082  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:39.466324  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 22:59:39.472238  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 22:59:39.488702  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 22:59:39.494809  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 22:59:39.508912  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 22:59:39.514176  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 22:59:39.528869  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 22:59:39.534452  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 22:59:39.547082  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 22:59:39.552126  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 22:59:39.565359  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 22:59:39.570444  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 22:59:39.583069  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:39.616063  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:39.649495  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:39.681835  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:39.717138  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:39.749821  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:39.782202  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:39.813910  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:39.846088  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:39.879164  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:39.911376  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:39.944000  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 22:59:39.967306  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 22:59:39.990350  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 22:59:40.013301  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 22:59:40.037246  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 22:59:40.060715  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 22:59:40.086564  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 22:59:40.109633  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:40.116643  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:40.130925  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136232  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136304  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.144024  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:40.158532  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:40.172833  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178370  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178436  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.186337  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:59:40.203312  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:40.218899  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224542  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224604  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.232341  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:40.246745  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:40.252498  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:40.260177  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:40.267967  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:40.275670  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:40.282924  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:40.290564  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:59:40.297918  140883 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 22:59:40.298017  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:40.298041  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:40.298079  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:40.326946  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:40.327021  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 22:59:40.327086  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:40.341513  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:40.341602  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 22:59:40.355326  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 22:59:40.377667  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:40.398672  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:40.420213  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:40.424583  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:40.440499  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.591016  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.625379  140883 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:40.625713  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.625790  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:40.625904  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 22:59:40.625917  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 136.601µs
	I1119 22:59:40.625925  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:40.625932  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:40.626121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.628127  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.628799  140883 out.go:179] * Verifying Kubernetes components...
	I1119 22:59:40.630163  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.631018  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631564  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:40.631591  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631793  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:40.839364  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.839885  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:40.839904  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:40.842028  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.844857  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845326  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:40.845355  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845505  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:40.874910  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 22:59:40.875019  140883 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 22:59:40.875298  140883 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 22:59:41.009083  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:41.009116  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:41.012633  140883 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
	W1119 22:59:42.877100  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:45.376962  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:47.876896  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:50.376621  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:52.876228  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:54.876713  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:57.376562  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:59.377033  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:01.876165  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:03.876776  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:06.376476  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:08.377072  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:10.876811  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:13.377002  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:15.876574  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:18.376735  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:20.876598  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:23.376586  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:25.876567  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:28.376801  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:30.876662  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:32.877033  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:35.376969  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:37.876252  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:39.876923  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:42.376913  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:44.876958  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:47.376642  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:49.376843  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:51.876453  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:54.376840  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:56.876275  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:58.876988  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:01.376794  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:03.876458  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:05.876958  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:08.377013  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:10.876929  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:13.376449  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:15.376525  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:17.376595  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:19.876563  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:22.376426  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:24.376713  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:26.376833  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:28.876508  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:30.876839  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:32.877099  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:35.377181  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:37.876444  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:39.877103  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:42.376270  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:44.376706  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:46.876968  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:49.376048  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:51.376320  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:53.376410  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:55.376492  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:57.876667  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:59.876765  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:02.376272  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:04.376314  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:06.376754  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:08.877042  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:11.376455  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:13.376516  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:15.376798  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:17.377028  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:19.876336  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:22.376568  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:24.876573  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:27.376833  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:29.876498  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:31.876574  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:34.376621  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:36.376848  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:38.877032  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:41.376174  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:43.377081  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:45.876182  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:48.376695  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:50.876795  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:52.876945  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:55.377185  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:57.876221  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:59.876403  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:01.876497  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:04.376656  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:06.376862  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:08.876459  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:11.376839  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:13.877025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:16.376692  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:18.377137  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:20.876509  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:22.876756  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:25.377113  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:27.876224  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:29.876784  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:31.877072  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:34.376176  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:36.876300  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:38.876430  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:41.376746  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:43.876333  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:46.376700  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:48.376946  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:50.376983  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:52.876447  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:55.376396  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:57.876511  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:00.376395  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:02.376931  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:04.876111  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:06.876594  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:09.376360  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:11.377039  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:13.876739  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:15.876954  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:18.376383  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:20.376729  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:22.877074  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:25.376970  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:27.377128  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:29.876794  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:32.376301  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:34.876592  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:37.376959  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:39.876221  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:41.876361  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:43.876530  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:45.877025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:48.376319  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:50.376838  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:52.376995  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:54.876470  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:56.876815  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:59.376319  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:01.376715  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:03.376967  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:05.876573  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:07.877236  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:10.376866  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:12.876650  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:15.376722  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:17.876558  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:20.376338  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:22.876342  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:25.376231  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:27.876076  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:30.376254  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:32.376759  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:34.876778  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:37.376154  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:39.377025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	I1119 23:05:40.875796  140883 node_ready.go:38] duration metric: took 6m0.000466447s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:05:40.877762  140883 out.go:203] 
	W1119 23:05:40.878901  140883 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1119 23:05:40.878921  140883 out.go:285] * 
	* 
	W1119 23:05:40.880854  140883 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 23:05:40.882387  140883 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-487903 node list --alsologtostderr -v 5" : exit status 80
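The six minutes of "connection refused" retries captured above are the node-readiness wait running into its context deadline: every poll of the API server at 192.168.39.15:8443 fails, and after 6m0s the wait surfaces as the GUEST_START error and exit status 80. As a rough illustration of that pattern (a sketch under assumptions, not minikube's actual implementation), the following Go snippet polls a node's Ready condition with client-go every 2.5 seconds until a 6-minute deadline; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// 6-minute deadline expires, retrying transient API errors such as
// "connection refused".
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil // transient: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-487903-m02"); err != nil {
		// Comparable to "WaitNodeCondition: context deadline exceeded" above.
		fmt.Println("node never became Ready:", err)
	}
}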
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903: exit status 2 (214.325785ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
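The status check above renders its output through a Go text/template: --format={{.Host}} selects only the Host field, which is why stdout is just "Running" while the command still exits non-zero for the overall cluster state. A minimal sketch of that kind of template rendering, using a simplified stand-in for the status struct (fields other than Host are illustrative):

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for the structure being formatted;
// only Host corresponds directly to the output above.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running"
		panic(err)
	}
}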
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m03_ha-487903-m02.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m02.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt                                                            │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt                      │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt                                                │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ node    │ ha-487903 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:53 UTC │
	│ node    │ ha-487903 node start m02 --alsologtostderr -v 5                                                                                     │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:53 UTC │ 19 Nov 25 22:54 UTC │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5                                                                                               │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5                                                                                  │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:58:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:58:56.213053  140883 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:56.213329  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213337  140883 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:56.213342  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213519  140883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 22:58:56.213975  140883 out.go:368] Setting JSON to false
	I1119 22:58:56.214867  140883 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16883,"bootTime":1763576253,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:58:56.215026  140883 start.go:143] virtualization: kvm guest
	I1119 22:58:56.217423  140883 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:58:56.219002  140883 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:56.219026  140883 notify.go:221] Checking for updates...
	I1119 22:58:56.221890  140883 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:56.223132  140883 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:58:56.224328  140883 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 22:58:56.225456  140883 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:58:56.226526  140883 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:56.228080  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:56.228220  140883 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:56.264170  140883 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 22:58:56.265437  140883 start.go:309] selected driver: kvm2
	I1119 22:58:56.265462  140883 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.265642  140883 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:56.266633  140883 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:56.266714  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:58:56.266798  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:58:56.266898  140883 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.267071  140883 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:56.269538  140883 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 22:58:56.270926  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:56.270958  140883 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:58:56.270984  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:56.271073  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:58:56.271085  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:56.271229  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:58:56.271448  140883 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:58:56.271493  140883 start.go:364] duration metric: took 26.421µs to acquireMachinesLock for "ha-487903"
	I1119 22:58:56.271509  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:58:56.271522  140883 fix.go:54] fixHost starting: 
	I1119 22:58:56.273404  140883 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 22:58:56.273427  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:58:56.275031  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 22:58:56.275084  140883 main.go:143] libmachine: starting domain...
	I1119 22:58:56.275096  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:58:56.275845  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:58:56.276258  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:58:56.276731  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:58:56.277856  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 22:58:57.532843  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:58:57.534321  140883 main.go:143] libmachine: domain is now running
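The restart step just above goes through libvirt: libmachine re-reads the stored domain XML, makes sure the default and mk-ha-487903 networks are active, and boots the already-defined ha-487903 domain. A minimal sketch of that step using the libvirt Go bindings (libvirt.org/go/libvirt, which needs the libvirt C headers to build), assuming the qemu:///system URI from the profile; this illustrates the mechanism rather than minikube's code:

package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same system URI as KVMQemuURI in the config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Look up the existing, stopped domain instead of defining a new one.
	dom, err := conn.LookupDomainByName("ha-487903")
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// Create() boots a defined-but-inactive domain (the "Restarting existing
	// kvm2 VM" step in the log).
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain is now running")
}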
	I1119 22:58:57.534360  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:58:57.535171  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535745  140883 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535758  140883 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 22:58:57.535763  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:58:57.536231  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.536255  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 22:58:57.536263  140883 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 22:58:57.536269  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:58:57.536284  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:58:57.538607  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.538989  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.539013  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.539174  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:57.539442  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:58:57.539453  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:00.588204  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:06.668207  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:09.789580  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:09.792830  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793316  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.793339  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793640  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:09.793859  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:09.796160  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796551  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.796574  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796736  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.796945  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.796957  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:09.920535  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:09.920579  140883 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 22:59:09.924026  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924613  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.924652  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924920  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.925162  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.925179  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 22:59:10.075390  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 22:59:10.078652  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079199  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.079233  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079435  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.079647  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.079675  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:10.221997  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:10.222032  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:10.222082  140883 buildroot.go:174] setting up certificates
	I1119 22:59:10.222102  140883 provision.go:84] configureAuth start
	I1119 22:59:10.225146  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.225685  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.225711  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228217  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228605  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.228627  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228759  140883 provision.go:143] copyHostCerts
	I1119 22:59:10.228794  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228835  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:10.228849  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228933  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:10.229026  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229051  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:10.229057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229096  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:10.229160  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229185  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:10.229189  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229230  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:10.229308  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
	I1119 22:59:10.335910  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:10.335996  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:10.338770  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339269  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.339307  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339538  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.439975  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:10.440060  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1119 22:59:10.477861  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:10.477964  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:59:10.529406  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:10.529472  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:10.570048  140883 provision.go:87] duration metric: took 347.930624ms to configureAuth
	I1119 22:59:10.570076  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:10.570440  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:10.573510  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.573997  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.574034  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.574235  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.574507  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.574526  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:10.838912  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:10.838950  140883 machine.go:97] duration metric: took 1.045075254s to provisionDockerMachine
	I1119 22:59:10.838968  140883 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 22:59:10.838983  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:10.839099  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:10.842141  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842656  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.842700  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842857  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.941042  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:10.946128  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:10.946154  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:10.946218  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:10.946302  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:10.946321  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:10.946415  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:10.958665  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:10.989852  140883 start.go:296] duration metric: took 150.865435ms for postStartSetup
	I1119 22:59:10.989981  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:10.992672  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993117  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.993143  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993318  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.080904  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:11.080983  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:11.124462  140883 fix.go:56] duration metric: took 14.852929829s for fixHost
	I1119 22:59:11.127772  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128299  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.128336  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128547  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:11.128846  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:11.128865  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:11.255539  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593151.225105539
	
	I1119 22:59:11.255568  140883 fix.go:216] guest clock: 1763593151.225105539
	I1119 22:59:11.255578  140883 fix.go:229] Guest: 2025-11-19 22:59:11.225105539 +0000 UTC Remote: 2025-11-19 22:59:11.124499316 +0000 UTC m=+14.964187528 (delta=100.606223ms)
	I1119 22:59:11.255598  140883 fix.go:200] guest clock delta is within tolerance: 100.606223ms
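
The delta above is just the guest "date +%s.%N" reading minus the host-side reference time; a minimal sketch of the same subtraction, using the two timestamps from the lines above:

    # guest clock 1763593151.225105539 minus remote reference 1763593151.124499316
    echo '1763593151.225105539 - 1763593151.124499316' | bc
    # -> .100606223, i.e. the 100.606223ms delta that fix.go reports as within tolerance
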
	I1119 22:59:11.255604  140883 start.go:83] releasing machines lock for "ha-487903", held for 14.984100369s
	I1119 22:59:11.258588  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259028  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.259061  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259648  140883 ssh_runner.go:195] Run: cat /version.json
	I1119 22:59:11.259725  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:11.262795  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263203  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263243  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263270  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263465  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.263776  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263809  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.264018  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.345959  140883 ssh_runner.go:195] Run: systemctl --version
	I1119 22:59:11.373994  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:11.522527  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:11.531055  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:11.531143  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:11.555635  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:11.555666  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:11.555762  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:11.592696  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:11.617501  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:11.617572  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:11.636732  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:11.654496  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:11.811000  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:12.032082  140883 docker.go:234] disabling docker service ...
	I1119 22:59:12.032160  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:12.048543  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:12.064141  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:12.225964  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:12.368239  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:12.384716  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:12.408056  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:12.408120  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.421146  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:12.421223  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.434510  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.447609  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.460732  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:12.477217  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.489987  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.511524  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.524517  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:12.536463  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:12.536536  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:12.563021  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:12.578130  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:12.729736  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
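
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, the cgroupfs driver and the unprivileged-port sysctl; a minimal sketch for checking that end state by hand (paths and values taken from the log, the grep itself is illustrative):

    # show the values the run just enforced in cri-o's drop-in config
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0"   (inside default_sysctls = [ ... ])
    sudo systemctl daemon-reload && sudo systemctl restart crio
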
	I1119 22:59:12.855038  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:12.855107  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:12.860898  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:12.860954  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:12.865294  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:12.912486  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 22:59:12.912590  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.943910  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.976663  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:12.980411  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.980805  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:12.980827  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.981017  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:12.986162  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:13.003058  140883 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:59:13.003281  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:13.003338  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:13.047712  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:59:13.047780  140883 ssh_runner.go:195] Run: which lz4
	I1119 22:59:13.052977  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1119 22:59:13.053081  140883 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 22:59:13.058671  140883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1119 22:59:13.058708  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1119 22:59:14.697348  140883 crio.go:462] duration metric: took 1.644299269s to copy over tarball
	I1119 22:59:14.697449  140883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 22:59:16.447188  140883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.749702497s)
	I1119 22:59:16.447222  140883 crio.go:469] duration metric: took 1.749848336s to extract the tarball
	I1119 22:59:16.447231  140883 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1119 22:59:16.489289  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:16.536108  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:16.536132  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:16.536140  140883 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 22:59:16.536265  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:16.536328  140883 ssh_runner.go:195] Run: crio config
	I1119 22:59:16.585135  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:59:16.585158  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:59:16.585181  140883 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:59:16.585202  140883 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:59:16.585355  140883 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
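
The rendered config above is later copied to /var/tmp/minikube/kubeadm.yaml.new; as a sketch (and assuming kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.34.1, as minikube lays it out), it can be parsed and validated without modifying the running cluster:

    # dry-run only parses the InitConfiguration/ClusterConfiguration and runs preflight
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
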
	
	I1119 22:59:16.585375  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:16.585419  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:16.615712  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:16.615824  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
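
Once this manifest is written to /etc/kubernetes/manifests (the kube-vip.yaml scp below), the kubelet runs it as a static pod; a minimal sketch for checking the VIP on a control-plane node, using the address and interface from the config above:

    sudo crictl ps --name kube-vip               # static pod container should be Running
    ip addr show eth0 | grep 192.168.39.254      # VIP is bound on eth0 of the current leader
    curl -k https://192.168.39.254:8443/healthz  # API server reachable through the VIP
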
	I1119 22:59:16.615913  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:16.633015  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:16.633116  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 22:59:16.646138  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 22:59:16.668865  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:16.691000  140883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 22:59:16.713854  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:16.736483  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:16.741324  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:16.757055  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:16.900472  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:16.922953  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 22:59:16.922982  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:16.922999  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:16.923147  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:16.923233  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:16.923245  140883 certs.go:257] generating profile certs ...
	I1119 22:59:16.923340  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:16.923369  140883 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 22:59:16.923388  140883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.15 192.168.39.191 192.168.39.160 192.168.39.254]
	I1119 22:59:17.222295  140883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 ...
	I1119 22:59:17.222330  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30: {Name:mk1efb8fb5e10ff1c6bc1bceec2ebc4b1a4cdce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222507  140883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 ...
	I1119 22:59:17.222521  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30: {Name:mk99b1381f2cff273ee01fc482a9705b00bd6fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222598  140883 certs.go:382] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt
	I1119 22:59:17.224167  140883 certs.go:386] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key
	I1119 22:59:17.227659  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:17.227687  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:17.227700  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:17.227711  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:17.227725  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:17.227746  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:17.227763  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:17.227778  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:17.227791  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:17.227853  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:17.227922  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:17.227938  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:17.227968  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:17.228003  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:17.228035  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:17.228085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:17.228122  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.228146  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.228164  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.228751  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:17.267457  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:17.301057  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:17.334334  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:17.369081  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:17.401525  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:17.435168  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:17.468258  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:17.501844  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:17.535729  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:17.568773  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:17.602167  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:59:17.625050  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:17.631971  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:17.646313  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652083  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652141  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.660153  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:17.675854  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:17.691421  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697623  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697704  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.706162  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:17.721477  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:17.736953  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743111  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743185  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.751321  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
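
The 51391683.0, 3ec20f2e.0 and b5213941.0 names above are OpenSSL subject-hash links, which is how the /etc/ssl/certs lookup works; the same links can be derived by hand as a sketch:

    # OpenSSL locates CA certificates by subject hash, so each PEM gets a <hash>.0 symlink
    for pem in 121369.pem 1213692.pem minikubeCA.pem; do
        h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$pem)
        sudo ln -fs /etc/ssl/certs/$pem /etc/ssl/certs/$h.0
    done
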
	I1119 22:59:17.766690  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:17.773200  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:17.781700  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:17.790000  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:17.798411  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:17.807029  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:17.815374  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
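
Each -checkend 86400 probe above asks whether the certificate stays valid for at least another 86400 seconds (24 hours); a sketch that runs the same checks with readable output:

    # openssl exits 0 if the cert will NOT expire within the window, 1 otherwise
    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
             etcd/server etcd/peer etcd/healthcheck-client; do
        sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
            && echo "$c: ok" || echo "$c: expiring within 24h"
    done
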
	I1119 22:59:17.823385  140883 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clust
erName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:59:17.823559  140883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:59:17.823640  140883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:59:17.865475  140883 cri.go:89] found id: ""
	I1119 22:59:17.865542  140883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:59:17.879260  140883 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:59:17.879283  140883 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:59:17.879329  140883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:59:17.892209  140883 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:59:17.892673  140883 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.892815  140883 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 22:59:17.893101  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.963155  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:59:17.963616  140883 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:59:17.963633  140883 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:59:17.963638  140883 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:59:17.963643  140883 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:59:17.963647  140883 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:59:17.963661  140883 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 22:59:17.964167  140883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:59:17.978087  140883 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 22:59:17.978113  140883 kubeadm.go:602] duration metric: took 98.825282ms to restartPrimaryControlPlane
	I1119 22:59:17.978123  140883 kubeadm.go:403] duration metric: took 154.749827ms to StartCluster
	I1119 22:59:17.978140  140883 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.978206  140883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.978813  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:18.091922  140883 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:18.091962  140883 start.go:242] waiting for startup goroutines ...
	I1119 22:59:18.091978  140883 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:59:18.092251  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.092350  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:18.092431  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 22:59:18.092446  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 107.182µs
	I1119 22:59:18.092458  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:18.092470  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:18.092656  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.094431  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:18.096799  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097214  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:18.097237  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097408  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:18.112697  140883 out.go:179] * Enabled addons: 
	I1119 22:59:18.214740  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/pause:3.1". assuming images are not preloaded.
	I1119 22:59:18.214767  140883 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/pause:3.1]
	I1119 22:59:18.214826  140883 image.go:138] retrieving image: registry.k8s.io/pause:3.1
	I1119 22:59:18.216192  140883 image.go:181] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1119 22:59:18.232424  140883 addons.go:515] duration metric: took 140.445828ms for enable addons: enabled=[]
	I1119 22:59:18.373430  140883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1119 22:59:18.424783  140883 cache_images.go:118] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1119 22:59:18.424834  140883 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1119 22:59:18.424902  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:18.429834  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.468042  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.506995  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.543506  140883 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1119 22:59:18.543547  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 -> /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.543609  140883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549496  140883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1119 22:59:18.549518  140883 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549580  140883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1119 22:59:19.599140  140883 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.049525714s)
	I1119 22:59:19.599189  140883 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1119 22:59:19.599233  140883 cache_images.go:125] Successfully loaded all cached images
	I1119 22:59:19.599247  140883 cache_images.go:94] duration metric: took 1.384470883s to LoadCachedImages
	I1119 22:59:19.604407  140883 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 22:59:19.604463  140883 start.go:247] waiting for cluster config update ...
	I1119 22:59:19.604477  140883 start.go:256] writing updated cluster config ...
	I1119 22:59:19.606572  140883 out.go:203] 
	I1119 22:59:19.608121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:19.608254  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.609863  140883 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 22:59:19.611047  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:19.611074  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:59:19.611204  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:59:19.611221  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:59:19.611355  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.611616  140883 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:59:19.611685  140883 start.go:364] duration metric: took 40.215µs to acquireMachinesLock for "ha-487903-m02"
	I1119 22:59:19.611709  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:59:19.611717  140883 fix.go:54] fixHost starting: m02
	I1119 22:59:19.613410  140883 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 22:59:19.613431  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:59:19.615098  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 22:59:19.615142  140883 main.go:143] libmachine: starting domain...
	I1119 22:59:19.615165  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:59:19.616007  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:59:19.616405  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:59:19.616894  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:59:19.618210  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
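
The same restart that libmachine performs with this XML can be reproduced against the qemu:///system URI from the cluster config with plain virsh; a sketch, not part of the test run:

    virsh -c qemu:///system dominfo ha-487903-m02        # state, vCPUs, memory
    virsh -c qemu:///system start ha-487903-m02          # what "starting domain..." does
    virsh -c qemu:///system net-dhcp-leases mk-ha-487903 # leases matched for the static IP
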
	
	I1119 22:59:20.934173  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:59:20.935608  140883 main.go:143] libmachine: domain is now running
	I1119 22:59:20.935632  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:59:20.936409  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936918  140883 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936934  140883 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 22:59:20.936940  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:59:20.937407  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.937433  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 22:59:20.937445  140883 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 22:59:20.937450  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:59:20.937455  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:59:20.939837  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940340  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.940366  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940532  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:20.940720  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:20.940730  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:24.012154  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:30.092142  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:33.096094  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 22:59:36.206633  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
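The "no route to host" and "connection refused" dial errors above are the expected failure modes while the guest is still booting; the driver simply keeps retrying TCP port 22 until sshd answers, which is what finally produces the empty `exit 0` result. A minimal standard-library sketch of such a wait loop (address and timeout values are illustrative):

    // waitForSSH polls a TCP port until it accepts connections or the deadline passes.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port 22 is reachable; SSH provisioning can proceed
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for SSH on %s: %w", addr, err)
    		}
    		time.Sleep(3 * time.Second) // "no route to host" / "connection refused" are retried
    	}
    }

    func main() {
    	if err := waitForSSH("192.168.39.191:22", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }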
	I1119 22:59:36.209899  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210391  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.210411  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210732  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:36.211013  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:36.213527  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.213977  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.214002  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.214194  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.214405  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.214418  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:36.326740  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:36.326781  140883 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 22:59:36.329446  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.329910  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.329941  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.330096  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.330305  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.330321  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 22:59:36.457310  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 22:59:36.460161  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460619  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.460649  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460898  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.461143  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.461164  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:36.581648  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
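Each provisioning command above (hostname, /etc/hostname, the /etc/hosts rewrite) is run over an SSH session authenticated with the machine's private key. A minimal sketch of executing one such command with golang.org/x/crypto/ssh, using the key path, user and address shown in the log (an illustration of the mechanism, not the provisioner's exact code):

    // runRemote executes a single shell command on the guest over SSH using key auth.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func runRemote(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("192.168.39.191:22", "docker",
    		"/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa",
    		`sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(out)
    }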
	I1119 22:59:36.581677  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:36.581693  140883 buildroot.go:174] setting up certificates
	I1119 22:59:36.581705  140883 provision.go:84] configureAuth start
	I1119 22:59:36.585049  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.585711  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.585755  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588067  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588494  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.588521  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588646  140883 provision.go:143] copyHostCerts
	I1119 22:59:36.588674  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588706  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:36.588714  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588769  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:36.588842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588860  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:36.588866  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588903  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:36.589025  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589050  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:36.589057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589079  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:36.589147  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
	I1119 22:59:36.826031  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:36.826092  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:36.828610  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829058  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.829082  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829236  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:36.914853  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:36.914951  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:36.947443  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:36.947526  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 22:59:36.979006  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:36.979097  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:59:37.010933  140883 provision.go:87] duration metric: took 429.212672ms to configureAuth
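configureAuth regenerates the machine's server certificate, signed by the shared CA, with the SANs listed above (127.0.0.1, 192.168.39.191, ha-487903-m02, localhost, minikube). A minimal sketch of issuing such a certificate with Go's crypto/x509, assuming the CA cert and key are already loaded; names and validity period are illustrative:

    // Package certutil: sign a server certificate for the given SANs with an existing CA.
    package certutil

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    func IssueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-487903-m02"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the provision.go line above.
    		DNSNames:    []string{"ha-487903-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.191")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }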
	I1119 22:59:37.010966  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:37.011249  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:37.014321  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.014846  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.014890  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.015134  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.015408  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.015434  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:37.258599  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:37.258634  140883 machine.go:97] duration metric: took 1.047602081s to provisionDockerMachine
	I1119 22:59:37.258649  140883 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 22:59:37.258662  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:37.258718  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:37.261730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262218  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.262247  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262427  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.348416  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:37.353564  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:37.353602  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:37.353676  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:37.353750  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:37.353760  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:37.353845  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:37.366805  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:37.398923  140883 start.go:296] duration metric: took 140.253592ms for postStartSetup
	I1119 22:59:37.399023  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:37.401945  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402392  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.402417  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402579  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.493849  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:37.493957  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:37.556929  140883 fix.go:56] duration metric: took 17.945204618s for fixHost
	I1119 22:59:37.560155  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560693  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.560730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560998  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.561206  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.561217  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:37.681336  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593177.645225217
	
	I1119 22:59:37.681361  140883 fix.go:216] guest clock: 1763593177.645225217
	I1119 22:59:37.681369  140883 fix.go:229] Guest: 2025-11-19 22:59:37.645225217 +0000 UTC Remote: 2025-11-19 22:59:37.55695737 +0000 UTC m=+41.396645577 (delta=88.267847ms)
	I1119 22:59:37.681385  140883 fix.go:200] guest clock delta is within tolerance: 88.267847ms
	I1119 22:59:37.681391  140883 start.go:83] releasing machines lock for "ha-487903-m02", held for 18.069691628s
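The guest clock check above runs `date +%s.%N` on the guest and compares the result with the host's notion of the time; a delta inside the tolerance (here 88.267847ms) means no clock adjustment is needed. A small sketch of that comparison, using the sample values from the log and an assumed tolerance for illustration:

    // clockDelta parses the guest's `date +%s.%N` output and returns the host-guest offset.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Unix(0, int64(1763593177.55695737*float64(time.Second))) // "Remote" timestamp from the log
    	delta, _ := clockDelta("1763593177.645225217", host)                  // guest output from the log
    	within := math.Abs(delta.Seconds()) < 2                               // assumed 2s tolerance, for illustration
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
    }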
	I1119 22:59:37.684191  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.684592  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.684617  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.686578  140883 out.go:179] * Found network options:
	I1119 22:59:37.687756  140883 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 22:59:37.688974  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 22:59:37.689312  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 22:59:37.689391  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:37.689429  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:37.692557  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.692674  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693088  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693119  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693175  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693199  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693322  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.693519  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.917934  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:37.925533  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:37.925625  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:37.946707  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:37.946736  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:37.946815  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:37.972689  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:37.990963  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:37.991033  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:38.008725  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:38.025289  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:38.180260  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:38.396498  140883 docker.go:234] disabling docker service ...
	I1119 22:59:38.396561  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:38.413974  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:38.429993  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:38.600828  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:38.743771  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:38.761006  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:38.784784  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:38.784849  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.797617  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:38.797682  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.810823  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.824064  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.837310  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:38.851169  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.864106  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.884838  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
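The cri-o drop-in is edited in place with a series of sed substitutions (pause image, cgroup manager, conmon cgroup, default_sysctls). The same rewrite can be expressed directly; a minimal sketch covering the two simplest substitutions, assuming the file layout shown above (/etc/crio/crio.conf.d/02-crio.conf):

    // rewriteCrioConf applies the pause_image and cgroup_manager substitutions shown above.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func rewriteCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
    		log.Fatal(err)
    	}
    }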
	I1119 22:59:38.897998  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:38.909976  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:38.910055  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:38.933644  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:38.947853  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:39.102667  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:59:39.237328  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:39.237425  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:39.244051  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:39.244122  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:39.248522  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:39.290126  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
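After restarting cri-o, the provisioner waits up to 60s for /var/run/crio/crio.sock to appear and then for `crictl version` to succeed, which is what produces the version block above. A minimal standard-library sketch of that readiness wait (socket path and timeout taken from the log):

    // waitForCrio polls for the CRI socket and then for a successful `crictl version`.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func waitForCrio(sock string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(sock); err == nil {
    			if out, err := exec.Command("sudo", "crictl", "version").CombinedOutput(); err == nil {
    				fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.29.1
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("cri-o not ready after %s", timeout)
    }

    func main() {
    	if err := waitForCrio("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }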
	I1119 22:59:39.290249  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.321443  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.354869  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:39.356230  140883 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 22:59:39.359840  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360302  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:39.360329  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360492  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:39.365473  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:39.382234  140883 mustload.go:66] Loading cluster: ha-487903
	I1119 22:59:39.382499  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:39.384228  140883 host.go:66] Checking if "ha-487903" exists ...
	I1119 22:59:39.384424  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 22:59:39.384434  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:39.384461  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:39.384590  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:39.384635  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:39.384645  140883 certs.go:257] generating profile certs ...
	I1119 22:59:39.384719  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:39.384773  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 22:59:39.384805  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:39.384819  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:39.384832  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:39.384842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:39.384852  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:39.384862  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:39.384884  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:39.384898  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:39.384910  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:39.384960  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:39.384991  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:39.385000  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:39.385020  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:39.385051  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:39.385085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:39.385135  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:39.385162  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 22:59:39.385175  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:39.385187  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:39.387504  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.387909  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:39.387931  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.388082  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:39.466324  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 22:59:39.472238  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 22:59:39.488702  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 22:59:39.494809  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 22:59:39.508912  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 22:59:39.514176  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 22:59:39.528869  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 22:59:39.534452  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 22:59:39.547082  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 22:59:39.552126  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 22:59:39.565359  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 22:59:39.570444  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 22:59:39.583069  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:39.616063  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:39.649495  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:39.681835  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:39.717138  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:39.749821  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:39.782202  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:39.813910  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:39.846088  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:39.879164  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:39.911376  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:39.944000  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 22:59:39.967306  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 22:59:39.990350  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 22:59:40.013301  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 22:59:40.037246  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 22:59:40.060715  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 22:59:40.086564  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 22:59:40.109633  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:40.116643  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:40.130925  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136232  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136304  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.144024  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:40.158532  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:40.172833  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178370  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178436  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.186337  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:59:40.203312  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:40.218899  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224542  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224604  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.232341  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:40.246745  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:40.252498  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:40.260177  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:40.267967  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:40.275670  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:40.282924  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:40.290564  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
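Each `openssl x509 -checkend 86400` call above verifies that the certificate stays valid for at least another 24 hours; a certificate closer to expiry than that would trigger regeneration. The equivalent check in Go, as a small sketch:

    // validFor24h reports whether a PEM-encoded certificate remains valid for another 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func validFor24h(path string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Until(cert.NotAfter) > 24*time.Hour, nil
    }

    func main() {
    	ok, err := validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	fmt.Println(ok, err)
    }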
	I1119 22:59:40.297918  140883 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 22:59:40.298017  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:40.298041  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:40.298079  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:40.326946  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:40.327021  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 22:59:40.327086  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:40.341513  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:40.341602  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 22:59:40.355326  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 22:59:40.377667  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:40.398672  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:40.420213  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:40.424583  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:40.440499  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.591016  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.625379  140883 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:40.625713  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.625790  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:40.625904  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 22:59:40.625917  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 136.601µs
	I1119 22:59:40.625925  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:40.625932  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:40.626121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.628127  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.628799  140883 out.go:179] * Verifying Kubernetes components...
	I1119 22:59:40.630163  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.631018  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631564  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:40.631591  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631793  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:40.839364  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.839885  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:40.839904  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:40.842028  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.844857  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845326  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:40.845355  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845505  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:40.874910  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 22:59:40.875019  140883 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 22:59:40.875298  140883 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 22:59:41.009083  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:41.009116  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:41.012633  140883 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
	W1119 22:59:42.877100  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:45.376962  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:47.876896  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:50.376621  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:52.876228  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:54.876713  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:57.376562  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:59.377033  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:01.876165  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:03.876776  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	[... the same node_ready.go warning repeats roughly every 2-2.5 seconds, always failing with "dial tcp 192.168.39.15:8443: connect: connection refused", until the final attempt below ...]
	W1119 23:05:39.377025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	I1119 23:05:40.875796  140883 node_ready.go:38] duration metric: took 6m0.000466447s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:05:40.877762  140883 out.go:203] 
	W1119 23:05:40.878901  140883 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1119 23:05:40.878921  140883 out.go:285] * 
	W1119 23:05:40.880854  140883 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 23:05:40.882387  140883 out.go:203] 
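
The six-minute wait above is the test's node_ready helper polling the ha-487903-m02 Node object until its Ready condition becomes True; every poll fails at the TCP layer because nothing answers on 192.168.39.15:8443. A minimal client-go sketch of that Ready-condition check, for orientation only (node name and retry interval are taken from the log; the kubeconfig location used here is an assumption, not the path the test harness uses):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: use the default ~/.kube/config rather than the CI kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same shape as the loop in the log: poll the node until Ready or timeout.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-487903-m02", metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err) // here: dial tcp 192.168.39.15:8443: connection refused
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2500 * time.Millisecond) // matches the ~2.5 s cadence seen above
		}
		fmt.Println("timed out waiting for Ready")
	}
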
	
	
	==> CRI-O <==
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.561213930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593541561188224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4539906-5b20-4582-bba1-b73690082a5f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.561893405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4c109ca-d9e1-4ccf-9a03-229da90334e6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.561946555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4c109ca-d9e1-4ccf-9a03-229da90334e6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.562017595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4c109ca-d9e1-4ccf-9a03-229da90334e6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.596950277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5db6297-9264-46a5-bb40-7e0ce84da659 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.597030994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5db6297-9264-46a5-bb40-7e0ce84da659 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.598569724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=980dd94b-a6b2-4db0-9d8e-122bdf68dcb9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.599037351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593541599016969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=980dd94b-a6b2-4db0-9d8e-122bdf68dcb9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.600884460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=325e3870-407d-4682-b393-2f2c51550931 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.600974052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=325e3870-407d-4682-b393-2f2c51550931 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.601029044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=325e3870-407d-4682-b393-2f2c51550931 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.638626875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09f168a9-cad0-45ff-b166-b5d544912212 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.638853571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09f168a9-cad0-45ff-b166-b5d544912212 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.639806087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=208d0e72-7a39-4682-a347-74ceb9ee305c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.640334743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593541640292969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=208d0e72-7a39-4682-a347-74ceb9ee305c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.641094374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=501d7221-f3a1-453b-a224-81f62e8e24cc name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.641183763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=501d7221-f3a1-453b-a224-81f62e8e24cc name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.641243496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=501d7221-f3a1-453b-a224-81f62e8e24cc name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.677800570Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dadfb37d-865a-4a84-8c97-584e3353b4d5 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.677883101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dadfb37d-865a-4a84-8c97-584e3353b4d5 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.679356856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17ef5a19-2231-465a-9dd7-669d83fc2b39 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.679896595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593541679872357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17ef5a19-2231-465a-9dd7-669d83fc2b39 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.680525844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=500054ea-779b-4319-abdb-85db55715aab name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.680590498Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=500054ea-779b-4319-abdb-85db55715aab name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:41 ha-487903 crio[1097]: time="2025-11-19 23:05:41.680650513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=500054ea-779b-4319-abdb-85db55715aab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	6b7e08202a351       ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178   6 minutes ago       Running             kube-vip            0                   4ac5d7afe9e29       kube-vip-ha-487903
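
Consistent with the ListContainers responses in the CRI-O debug output above, only the static kube-vip container is running; none of the control-plane containers (apiserver, etcd, controller-manager, scheduler) have come back after the restart. Those debug entries correspond to plain CRI gRPC calls against the CRI-O socket; a rough Go sketch of the same query (the socket path is the stock CRI-O default and is an assumption here, not confirmed by this log):

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: default CRI-O runtime socket location.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Equivalent of the ListContainers requests CRI-O logs above.
		resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
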
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1119 23:05:41.831630    1766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:41.832264    1766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:41.833841    1766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:41.834358    1766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:41.836029    1766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
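
Every call in this describe-nodes attempt dies at the TCP dial to localhost:8443, before TLS or authentication are even attempted, which means no apiserver process is bound to that port on ha-487903. A small stdlib-only probe makes that distinction explicit (host and port copied from the error above; an illustrative sketch, not part of the test suite):

	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			if errors.Is(err, syscall.ECONNREFUSED) {
				// The host answered but nothing is bound to the port: the apiserver is down.
				fmt.Println("connection refused - no apiserver listening on 8443")
			} else {
				fmt.Println("dial failed for another reason:", err)
			}
			return
		}
		conn.Close()
		fmt.Println("something is listening on 8443 (apiserver may be up)")
	}
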
	
	
	==> dmesg <==
	[Nov19 22:58] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 22:59] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002202] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.908092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.107363] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.029379] kauditd_printk_skb: 142 callbacks suppressed
	
	
	==> kernel <==
	 23:05:41 up 6 min,  0 users,  load average: 0.00, 0.06, 0.04
	Linux ha-487903 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Nov 19 23:05:39 ha-487903 kubelet[1241]: E1119 23:05:39.839277    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:39 ha-487903 kubelet[1241]: E1119 23:05:39.940628    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.041836    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.142942    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.243691    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.344611    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.445813    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.547302    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.648806    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.706749    1241 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.15:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-487903?timeout=10s\": dial tcp 192.168.39.15:8443: connect: connection refused" interval="7s"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.749997    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.851029    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: I1119 23:05:40.929668    1241 kubelet_node_status.go:75] "Attempting to register node" node="ha-487903"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.930490    1241 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.39.15:8443/api/v1/nodes\": dial tcp 192.168.39.15:8443: connect: connection refused" node="ha-487903"
	Nov 19 23:05:40 ha-487903 kubelet[1241]: E1119 23:05:40.952206    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.053431    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.154724    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.228064    1241 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.39.15:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.15:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-487903.18798aa1e7444f8c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-487903,UID:ha-487903,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-487903 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-487903,},FirstTimestamp:2025-11-19 22:59:17.066641292 +0000 UTC m=+0.146753141,LastTimestamp:2025-11-19 22:59:17.066641292 +0000 UTC m=+0.146753141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-487903,}"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.255729    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.356763    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.457782    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.559463    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.661198    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.763389    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.864696    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
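
The kubelet shows the same symptom from its side: node-status reconstruction, lease renewal, and node registration all fail because 192.168.39.15:8443 refuses connections. The lease it keeps retrying is an ordinary coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace; once an apiserver is reachable it can be inspected with client-go, roughly as follows (a sketch; the kubeconfig location is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: default ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The object the kubelet fails to renew in the log above.
		lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(
			context.TODO(), "ha-487903", metav1.GetOptions{})
		if err != nil {
			fmt.Println("lease lookup failed:", err)
			return
		}
		if lease.Spec.HolderIdentity != nil {
			fmt.Println("holder:", *lease.Spec.HolderIdentity)
		}
		fmt.Println("last renew:", lease.Spec.RenewTime)
	}
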
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903: exit status 2 (205.392166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-487903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (663.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-487903 node delete m03 --alsologtostderr -v 5: exit status 83 (67.536465ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-487903-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-487903"

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:05:42.231989  142520 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:05:42.232194  142520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:42.232205  142520 out.go:374] Setting ErrFile to fd 2...
	I1119 23:05:42.232209  142520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:42.232735  142520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:05:42.233411  142520 mustload.go:66] Loading cluster: ha-487903
	I1119 23:05:42.233924  142520 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:05:42.235702  142520 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:05:42.237067  142520 host.go:66] Checking if "ha-487903-m02" exists ...
	I1119 23:05:42.240027  142520 out.go:179] * The control-plane node ha-487903-m03 host is not running: state=Stopped
	I1119 23:05:42.241313  142520 out.go:179]   To start a cluster, run: "minikube start -p ha-487903"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-487903 node delete m03 --alsologtostderr -v 5": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5: exit status 7 (340.0248ms)

                                                
                                                
-- stdout --
	ha-487903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-487903-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-487903-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:05:42.299750  142531 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:05:42.300035  142531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:42.300045  142531 out.go:374] Setting ErrFile to fd 2...
	I1119 23:05:42.300049  142531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:42.300209  142531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:05:42.300393  142531 out.go:368] Setting JSON to false
	I1119 23:05:42.300426  142531 mustload.go:66] Loading cluster: ha-487903
	I1119 23:05:42.300491  142531 notify.go:221] Checking for updates...
	I1119 23:05:42.300776  142531 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:05:42.300790  142531 status.go:174] checking status of ha-487903 ...
	I1119 23:05:42.302814  142531 status.go:371] ha-487903 host status = "Running" (err=<nil>)
	I1119 23:05:42.302836  142531 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:05:42.305254  142531 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:42.305693  142531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:42.305720  142531 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:42.305845  142531 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:05:42.306056  142531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:05:42.307947  142531 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:42.308336  142531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:42.308359  142531 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:42.308485  142531 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:05:42.394319  142531 ssh_runner.go:195] Run: systemctl --version
	I1119 23:05:42.401569  142531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:05:42.421061  142531 kubeconfig.go:125] found "ha-487903" server: "https://192.168.39.254:8443"
	I1119 23:05:42.421104  142531 api_server.go:166] Checking apiserver status ...
	I1119 23:05:42.421154  142531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1119 23:05:42.440867  142531 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:05:42.440915  142531 status.go:463] ha-487903 apiserver status = Running (err=<nil>)
	I1119 23:05:42.440929  142531 status.go:176] ha-487903 status: &{Name:ha-487903 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:05:42.440948  142531 status.go:174] checking status of ha-487903-m02 ...
	I1119 23:05:42.442966  142531 status.go:371] ha-487903-m02 host status = "Running" (err=<nil>)
	I1119 23:05:42.443009  142531 host.go:66] Checking if "ha-487903-m02" exists ...
	I1119 23:05:42.445927  142531 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:05:42.446407  142531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:05:42.446441  142531 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:05:42.446587  142531 host.go:66] Checking if "ha-487903-m02" exists ...
	I1119 23:05:42.446822  142531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:05:42.449191  142531 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:05:42.449638  142531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:05:42.449675  142531 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:05:42.449828  142531 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:05:42.538750  142531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:05:42.557644  142531 kubeconfig.go:125] found "ha-487903" server: "https://192.168.39.254:8443"
	I1119 23:05:42.557675  142531 api_server.go:166] Checking apiserver status ...
	I1119 23:05:42.557714  142531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1119 23:05:42.579040  142531 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:05:42.579069  142531 status.go:463] ha-487903-m02 apiserver status = Running (err=<nil>)
	I1119 23:05:42.579081  142531 status.go:176] ha-487903-m02 status: &{Name:ha-487903-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:05:42.579100  142531 status.go:174] checking status of ha-487903-m03 ...
	I1119 23:05:42.580793  142531 status.go:371] ha-487903-m03 host status = "Stopped" (err=<nil>)
	I1119 23:05:42.580826  142531 status.go:384] host is not running, skipping remaining checks
	I1119 23:05:42.580831  142531 status.go:176] ha-487903-m03 status: &{Name:ha-487903-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:05:42.580848  142531 status.go:174] checking status of ha-487903-m04 ...
	I1119 23:05:42.581923  142531 status.go:371] ha-487903-m04 host status = "Stopped" (err=<nil>)
	I1119 23:05:42.581937  142531 status.go:384] host is not running, skipping remaining checks
	I1119 23:05:42.581941  142531 status.go:176] ha-487903-m04 status: &{Name:ha-487903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903: exit status 2 (197.472019ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m02.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt                                                            │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt                      │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt                                                │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ node    │ ha-487903 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:53 UTC │
	│ node    │ ha-487903 node start m02 --alsologtostderr -v 5                                                                                     │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:53 UTC │ 19 Nov 25 22:54 UTC │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5                                                                                               │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5                                                                                  │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ node    │ ha-487903 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:58:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:58:56.213053  140883 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:56.213329  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213337  140883 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:56.213342  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213519  140883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 22:58:56.213975  140883 out.go:368] Setting JSON to false
	I1119 22:58:56.214867  140883 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16883,"bootTime":1763576253,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:58:56.215026  140883 start.go:143] virtualization: kvm guest
	I1119 22:58:56.217423  140883 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:58:56.219002  140883 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:56.219026  140883 notify.go:221] Checking for updates...
	I1119 22:58:56.221890  140883 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:56.223132  140883 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:58:56.224328  140883 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 22:58:56.225456  140883 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:58:56.226526  140883 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:56.228080  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:56.228220  140883 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:56.264170  140883 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 22:58:56.265437  140883 start.go:309] selected driver: kvm2
	I1119 22:58:56.265462  140883 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.265642  140883 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:56.266633  140883 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:56.266714  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:58:56.266798  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:58:56.266898  140883 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.267071  140883 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:56.269538  140883 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 22:58:56.270926  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:56.270958  140883 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:58:56.270984  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:56.271073  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:58:56.271085  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:56.271229  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:58:56.271448  140883 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:58:56.271493  140883 start.go:364] duration metric: took 26.421µs to acquireMachinesLock for "ha-487903"
	I1119 22:58:56.271509  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:58:56.271522  140883 fix.go:54] fixHost starting: 
	I1119 22:58:56.273404  140883 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 22:58:56.273427  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:58:56.275031  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 22:58:56.275084  140883 main.go:143] libmachine: starting domain...
	I1119 22:58:56.275096  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:58:56.275845  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:58:56.276258  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:58:56.276731  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:58:56.277856  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 22:58:57.532843  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:58:57.534321  140883 main.go:143] libmachine: domain is now running
	I1119 22:58:57.534360  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:58:57.535171  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535745  140883 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535758  140883 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 22:58:57.535763  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:58:57.536231  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.536255  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 22:58:57.536263  140883 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 22:58:57.536269  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:58:57.536284  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:58:57.538607  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.538989  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.539013  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.539174  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:57.539442  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:58:57.539453  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:00.588204  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:06.668207  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:09.789580  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:09.792830  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793316  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.793339  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793640  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:09.793859  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:09.796160  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796551  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.796574  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796736  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.796945  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.796957  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:09.920535  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:09.920579  140883 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 22:59:09.924026  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924613  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.924652  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924920  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.925162  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.925179  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 22:59:10.075390  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 22:59:10.078652  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079199  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.079233  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079435  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.079647  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.079675  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:10.221997  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:10.222032  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:10.222082  140883 buildroot.go:174] setting up certificates
	I1119 22:59:10.222102  140883 provision.go:84] configureAuth start
	I1119 22:59:10.225146  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.225685  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.225711  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228217  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228605  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.228627  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228759  140883 provision.go:143] copyHostCerts
	I1119 22:59:10.228794  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228835  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:10.228849  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228933  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:10.229026  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229051  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:10.229057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229096  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:10.229160  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229185  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:10.229189  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229230  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:10.229308  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
	I1119 22:59:10.335910  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:10.335996  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:10.338770  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339269  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.339307  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339538  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.439975  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:10.440060  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1119 22:59:10.477861  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:10.477964  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:59:10.529406  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:10.529472  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:10.570048  140883 provision.go:87] duration metric: took 347.930624ms to configureAuth
	I1119 22:59:10.570076  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:10.570440  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:10.573510  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.573997  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.574034  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.574235  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.574507  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.574526  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:10.838912  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:10.838950  140883 machine.go:97] duration metric: took 1.045075254s to provisionDockerMachine
	I1119 22:59:10.838968  140883 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 22:59:10.838983  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:10.839099  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:10.842141  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842656  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.842700  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842857  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.941042  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:10.946128  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:10.946154  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:10.946218  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:10.946302  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:10.946321  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:10.946415  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:10.958665  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:10.989852  140883 start.go:296] duration metric: took 150.865435ms for postStartSetup
	I1119 22:59:10.989981  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:10.992672  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993117  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.993143  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993318  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.080904  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:11.080983  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:11.124462  140883 fix.go:56] duration metric: took 14.852929829s for fixHost
	I1119 22:59:11.127772  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128299  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.128336  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128547  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:11.128846  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:11.128865  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:11.255539  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593151.225105539
	
	I1119 22:59:11.255568  140883 fix.go:216] guest clock: 1763593151.225105539
	I1119 22:59:11.255578  140883 fix.go:229] Guest: 2025-11-19 22:59:11.225105539 +0000 UTC Remote: 2025-11-19 22:59:11.124499316 +0000 UTC m=+14.964187528 (delta=100.606223ms)
	I1119 22:59:11.255598  140883 fix.go:200] guest clock delta is within tolerance: 100.606223ms
	I1119 22:59:11.255604  140883 start.go:83] releasing machines lock for "ha-487903", held for 14.984100369s
	I1119 22:59:11.258588  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259028  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.259061  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259648  140883 ssh_runner.go:195] Run: cat /version.json
	I1119 22:59:11.259725  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:11.262795  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263203  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263243  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263270  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263465  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.263776  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263809  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.264018  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.345959  140883 ssh_runner.go:195] Run: systemctl --version
	I1119 22:59:11.373994  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:11.522527  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:11.531055  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:11.531143  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:11.555635  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:11.555666  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:11.555762  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:11.592696  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:11.617501  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:11.617572  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:11.636732  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:11.654496  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:11.811000  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:12.032082  140883 docker.go:234] disabling docker service ...
	I1119 22:59:12.032160  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:12.048543  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:12.064141  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:12.225964  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:12.368239  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:12.384716  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:12.408056  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:12.408120  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.421146  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:12.421223  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.434510  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.447609  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.460732  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:12.477217  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.489987  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.511524  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.524517  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:12.536463  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:12.536536  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:12.563021  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:12.578130  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:12.729736  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:59:12.855038  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:12.855107  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:12.860898  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:12.860954  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:12.865294  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:12.912486  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 22:59:12.912590  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.943910  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.976663  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:12.980411  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.980805  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:12.980827  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.981017  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:12.986162  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:13.003058  140883 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:59:13.003281  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:13.003338  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:13.047712  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:59:13.047780  140883 ssh_runner.go:195] Run: which lz4
	I1119 22:59:13.052977  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1119 22:59:13.053081  140883 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 22:59:13.058671  140883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1119 22:59:13.058708  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1119 22:59:14.697348  140883 crio.go:462] duration metric: took 1.644299269s to copy over tarball
	I1119 22:59:14.697449  140883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 22:59:16.447188  140883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.749702497s)
	I1119 22:59:16.447222  140883 crio.go:469] duration metric: took 1.749848336s to extract the tarball
	I1119 22:59:16.447231  140883 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1119 22:59:16.489289  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:16.536108  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:16.536132  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:16.536140  140883 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 22:59:16.536265  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:16.536328  140883 ssh_runner.go:195] Run: crio config
	I1119 22:59:16.585135  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:59:16.585158  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:59:16.585181  140883 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:59:16.585202  140883 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:59:16.585355  140883 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:59:16.585375  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:16.585419  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:16.615712  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:16.615824  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 22:59:16.615913  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:16.633015  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:16.633116  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 22:59:16.646138  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 22:59:16.668865  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:16.691000  140883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 22:59:16.713854  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:16.736483  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:16.741324  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:16.757055  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:16.900472  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:16.922953  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 22:59:16.922982  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:16.922999  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:16.923147  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:16.923233  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:16.923245  140883 certs.go:257] generating profile certs ...
	I1119 22:59:16.923340  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:16.923369  140883 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 22:59:16.923388  140883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.15 192.168.39.191 192.168.39.160 192.168.39.254]
	I1119 22:59:17.222295  140883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 ...
	I1119 22:59:17.222330  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30: {Name:mk1efb8fb5e10ff1c6bc1bceec2ebc4b1a4cdce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222507  140883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 ...
	I1119 22:59:17.222521  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30: {Name:mk99b1381f2cff273ee01fc482a9705b00bd6fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222598  140883 certs.go:382] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt
	I1119 22:59:17.224167  140883 certs.go:386] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key
	I1119 22:59:17.227659  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:17.227687  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:17.227700  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:17.227711  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:17.227725  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:17.227746  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:17.227763  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:17.227778  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:17.227791  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:17.227853  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:17.227922  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:17.227938  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:17.227968  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:17.228003  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:17.228035  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:17.228085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:17.228122  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.228146  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.228164  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.228751  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:17.267457  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:17.301057  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:17.334334  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:17.369081  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:17.401525  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:17.435168  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:17.468258  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:17.501844  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:17.535729  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:17.568773  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:17.602167  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:59:17.625050  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:17.631971  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:17.646313  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652083  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652141  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.660153  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:17.675854  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:17.691421  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697623  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697704  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.706162  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:17.721477  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:17.736953  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743111  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743185  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.751321  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:59:17.766690  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:17.773200  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:17.781700  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:17.790000  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:17.798411  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:17.807029  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:17.815374  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:59:17.823385  140883 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clust
erName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:59:17.823559  140883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:59:17.823640  140883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:59:17.865475  140883 cri.go:89] found id: ""
	I1119 22:59:17.865542  140883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:59:17.879260  140883 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:59:17.879283  140883 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:59:17.879329  140883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:59:17.892209  140883 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:59:17.892673  140883 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.892815  140883 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 22:59:17.893101  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.963155  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:59:17.963616  140883 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:59:17.963633  140883 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:59:17.963638  140883 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:59:17.963643  140883 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:59:17.963647  140883 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:59:17.963661  140883 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 22:59:17.964167  140883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:59:17.978087  140883 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 22:59:17.978113  140883 kubeadm.go:602] duration metric: took 98.825282ms to restartPrimaryControlPlane
	I1119 22:59:17.978123  140883 kubeadm.go:403] duration metric: took 154.749827ms to StartCluster
	I1119 22:59:17.978140  140883 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.978206  140883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.978813  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:18.091922  140883 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:18.091962  140883 start.go:242] waiting for startup goroutines ...
	I1119 22:59:18.091978  140883 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:59:18.092251  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.092350  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:18.092431  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 22:59:18.092446  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 107.182µs
	I1119 22:59:18.092458  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:18.092470  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:18.092656  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.094431  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:18.096799  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097214  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:18.097237  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097408  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:18.112697  140883 out.go:179] * Enabled addons: 
	I1119 22:59:18.214740  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/pause:3.1". assuming images are not preloaded.
	I1119 22:59:18.214767  140883 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/pause:3.1]
	I1119 22:59:18.214826  140883 image.go:138] retrieving image: registry.k8s.io/pause:3.1
	I1119 22:59:18.216192  140883 image.go:181] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1119 22:59:18.232424  140883 addons.go:515] duration metric: took 140.445828ms for enable addons: enabled=[]
	I1119 22:59:18.373430  140883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1119 22:59:18.424783  140883 cache_images.go:118] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1119 22:59:18.424834  140883 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1119 22:59:18.424902  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:18.429834  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.468042  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.506995  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.543506  140883 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1119 22:59:18.543547  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 -> /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.543609  140883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549496  140883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1119 22:59:18.549518  140883 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549580  140883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1119 22:59:19.599140  140883 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.049525714s)
	I1119 22:59:19.599189  140883 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1119 22:59:19.599233  140883 cache_images.go:125] Successfully loaded all cached images
	I1119 22:59:19.599247  140883 cache_images.go:94] duration metric: took 1.384470883s to LoadCachedImages
	I1119 22:59:19.604407  140883 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 22:59:19.604463  140883 start.go:247] waiting for cluster config update ...
	I1119 22:59:19.604477  140883 start.go:256] writing updated cluster config ...
	I1119 22:59:19.606572  140883 out.go:203] 
	I1119 22:59:19.608121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:19.608254  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.609863  140883 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 22:59:19.611047  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:19.611074  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:59:19.611204  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:59:19.611221  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:59:19.611355  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.611616  140883 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:59:19.611685  140883 start.go:364] duration metric: took 40.215µs to acquireMachinesLock for "ha-487903-m02"
	I1119 22:59:19.611709  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:59:19.611717  140883 fix.go:54] fixHost starting: m02
	I1119 22:59:19.613410  140883 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 22:59:19.613431  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:59:19.615098  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 22:59:19.615142  140883 main.go:143] libmachine: starting domain...
	I1119 22:59:19.615165  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:59:19.616007  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:59:19.616405  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:59:19.616894  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:59:19.618210  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 22:59:20.934173  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:59:20.935608  140883 main.go:143] libmachine: domain is now running
	I1119 22:59:20.935632  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:59:20.936409  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936918  140883 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936934  140883 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 22:59:20.936940  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:59:20.937407  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.937433  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 22:59:20.937445  140883 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 22:59:20.937450  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:59:20.937455  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:59:20.939837  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940340  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.940366  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940532  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:20.940720  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:20.940730  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:24.012154  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:30.092142  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:33.096094  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 22:59:36.206633  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:36.209899  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210391  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.210411  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210732  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:36.211013  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:36.213527  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.213977  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.214002  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.214194  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.214405  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.214418  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:36.326740  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:36.326781  140883 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 22:59:36.329446  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.329910  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.329941  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.330096  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.330305  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.330321  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 22:59:36.457310  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 22:59:36.460161  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460619  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.460649  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460898  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.461143  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.461164  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:36.581648  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:36.581677  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:36.581693  140883 buildroot.go:174] setting up certificates
	I1119 22:59:36.581705  140883 provision.go:84] configureAuth start
	I1119 22:59:36.585049  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.585711  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.585755  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588067  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588494  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.588521  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588646  140883 provision.go:143] copyHostCerts
	I1119 22:59:36.588674  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588706  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:36.588714  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588769  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:36.588842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588860  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:36.588866  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588903  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:36.589025  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589050  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:36.589057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589079  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:36.589147  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
	I1119 22:59:36.826031  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:36.826092  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:36.828610  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829058  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.829082  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829236  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:36.914853  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:36.914951  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:36.947443  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:36.947526  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 22:59:36.979006  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:36.979097  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:59:37.010933  140883 provision.go:87] duration metric: took 429.212672ms to configureAuth
	I1119 22:59:37.010966  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:37.011249  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:37.014321  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.014846  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.014890  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.015134  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.015408  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.015434  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:37.258599  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:37.258634  140883 machine.go:97] duration metric: took 1.047602081s to provisionDockerMachine
	I1119 22:59:37.258649  140883 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 22:59:37.258662  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:37.258718  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:37.261730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262218  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.262247  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262427  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.348416  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:37.353564  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:37.353602  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:37.353676  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:37.353750  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:37.353760  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:37.353845  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:37.366805  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:37.398923  140883 start.go:296] duration metric: took 140.253592ms for postStartSetup
	I1119 22:59:37.399023  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:37.401945  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402392  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.402417  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402579  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.493849  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:37.493957  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:37.556929  140883 fix.go:56] duration metric: took 17.945204618s for fixHost
	I1119 22:59:37.560155  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560693  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.560730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560998  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.561206  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.561217  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:37.681336  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593177.645225217
	
	I1119 22:59:37.681361  140883 fix.go:216] guest clock: 1763593177.645225217
	I1119 22:59:37.681369  140883 fix.go:229] Guest: 2025-11-19 22:59:37.645225217 +0000 UTC Remote: 2025-11-19 22:59:37.55695737 +0000 UTC m=+41.396645577 (delta=88.267847ms)
	I1119 22:59:37.681385  140883 fix.go:200] guest clock delta is within tolerance: 88.267847ms
	I1119 22:59:37.681391  140883 start.go:83] releasing machines lock for "ha-487903-m02", held for 18.069691628s
	I1119 22:59:37.684191  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.684592  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.684617  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.686578  140883 out.go:179] * Found network options:
	I1119 22:59:37.687756  140883 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 22:59:37.688974  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 22:59:37.689312  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 22:59:37.689391  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:37.689429  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:37.692557  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.692674  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693088  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693119  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693175  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693199  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693322  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.693519  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.917934  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:37.925533  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:37.925625  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:37.946707  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:37.946736  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:37.946815  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:37.972689  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:37.990963  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:37.991033  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:38.008725  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:38.025289  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:38.180260  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:38.396498  140883 docker.go:234] disabling docker service ...
	I1119 22:59:38.396561  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:38.413974  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:38.429993  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:38.600828  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:38.743771  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:38.761006  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:38.784784  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:38.784849  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.797617  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:38.797682  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.810823  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.824064  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.837310  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:38.851169  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.864106  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.884838  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.897998  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:38.909976  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:38.910055  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:38.933644  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:38.947853  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:39.102667  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:59:39.237328  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:39.237425  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:39.244051  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:39.244122  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:39.248522  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:39.290126  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 22:59:39.290249  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.321443  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.354869  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:39.356230  140883 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 22:59:39.359840  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360302  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:39.360329  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360492  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:39.365473  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:39.382234  140883 mustload.go:66] Loading cluster: ha-487903
	I1119 22:59:39.382499  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:39.384228  140883 host.go:66] Checking if "ha-487903" exists ...
	I1119 22:59:39.384424  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 22:59:39.384434  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:39.384461  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:39.384590  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:39.384635  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:39.384645  140883 certs.go:257] generating profile certs ...
	I1119 22:59:39.384719  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:39.384773  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 22:59:39.384805  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:39.384819  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:39.384832  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:39.384842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:39.384852  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:39.384862  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:39.384884  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:39.384898  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:39.384910  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:39.384960  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:39.384991  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:39.385000  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:39.385020  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:39.385051  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:39.385085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:39.385135  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:39.385162  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 22:59:39.385175  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:39.385187  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:39.387504  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.387909  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:39.387931  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.388082  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:39.466324  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 22:59:39.472238  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 22:59:39.488702  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 22:59:39.494809  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 22:59:39.508912  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 22:59:39.514176  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 22:59:39.528869  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 22:59:39.534452  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 22:59:39.547082  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 22:59:39.552126  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 22:59:39.565359  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 22:59:39.570444  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 22:59:39.583069  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:39.616063  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:39.649495  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:39.681835  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:39.717138  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:39.749821  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:39.782202  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:39.813910  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:39.846088  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:39.879164  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:39.911376  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:39.944000  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 22:59:39.967306  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 22:59:39.990350  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 22:59:40.013301  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 22:59:40.037246  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 22:59:40.060715  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 22:59:40.086564  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 22:59:40.109633  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:40.116643  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:40.130925  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136232  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136304  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.144024  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:40.158532  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:40.172833  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178370  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178436  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.186337  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:59:40.203312  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:40.218899  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224542  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224604  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.232341  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:40.246745  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:40.252498  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:40.260177  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:40.267967  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:40.275670  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:40.282924  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:40.290564  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:59:40.297918  140883 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 22:59:40.298017  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:40.298041  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:40.298079  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:40.326946  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:40.327021  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 22:59:40.327086  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:40.341513  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:40.341602  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 22:59:40.355326  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 22:59:40.377667  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:40.398672  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:40.420213  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:40.424583  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:40.440499  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.591016  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.625379  140883 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:40.625713  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.625790  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:40.625904  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 22:59:40.625917  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 136.601µs
	I1119 22:59:40.625925  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:40.625932  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:40.626121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.628127  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.628799  140883 out.go:179] * Verifying Kubernetes components...
	I1119 22:59:40.630163  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.631018  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631564  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:40.631591  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631793  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:40.839364  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.839885  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:40.839904  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:40.842028  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.844857  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845326  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:40.845355  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845505  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:40.874910  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 22:59:40.875019  140883 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 22:59:40.875298  140883 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 22:59:41.009083  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:41.009116  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:41.012633  140883 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
	W1119 22:59:42.877100  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:45.376962  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:47.876896  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:50.376621  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:52.876228  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:54.876713  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:57.376562  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:59.377033  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:01.876165  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:03.876776  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:06.376476  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:08.377072  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:10.876811  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:13.377002  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:15.876574  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:18.376735  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:20.876598  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:23.376586  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:25.876567  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:28.376801  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:30.876662  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:32.877033  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:35.376969  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:37.876252  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:39.876923  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:42.376913  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:44.876958  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:47.376642  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:49.376843  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:51.876453  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:54.376840  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:56.876275  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:58.876988  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:01.376794  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:03.876458  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:05.876958  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:08.377013  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:10.876929  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:13.376449  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:15.376525  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:17.376595  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:19.876563  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:22.376426  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:24.376713  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:26.376833  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:28.876508  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:30.876839  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:32.877099  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:35.377181  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:37.876444  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:39.877103  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:42.376270  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:44.376706  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:46.876968  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:49.376048  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:51.376320  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:53.376410  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:55.376492  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:57.876667  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:59.876765  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:02.376272  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:04.376314  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:06.376754  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:08.877042  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:11.376455  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:13.376516  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:15.376798  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:17.377028  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:19.876336  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:22.376568  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:24.876573  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:27.376833  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:29.876498  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:31.876574  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:34.376621  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:36.376848  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:38.877032  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:41.376174  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:43.377081  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:45.876182  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:48.376695  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:50.876795  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:52.876945  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:55.377185  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:57.876221  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:59.876403  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:01.876497  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:04.376656  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:06.376862  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:08.876459  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:11.376839  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:13.877025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:16.376692  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:18.377137  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:20.876509  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:22.876756  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:25.377113  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:27.876224  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:29.876784  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:31.877072  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:34.376176  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:36.876300  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:38.876430  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:41.376746  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:43.876333  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:46.376700  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:48.376946  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:50.376983  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:52.876447  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:55.376396  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:57.876511  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:00.376395  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:02.376931  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:04.876111  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:06.876594  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:09.376360  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:11.377039  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:13.876739  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:15.876954  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:18.376383  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:20.376729  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:22.877074  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:25.376970  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:27.377128  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:29.876794  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:32.376301  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:34.876592  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:37.376959  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:39.876221  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:41.876361  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:43.876530  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:45.877025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:48.376319  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:50.376838  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:52.376995  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:54.876470  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:56.876815  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:59.376319  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:01.376715  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:03.376967  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:05.876573  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:07.877236  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:10.376866  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:12.876650  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:15.376722  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:17.876558  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:20.376338  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:22.876342  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:25.376231  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:27.876076  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:30.376254  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:32.376759  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:34.876778  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:37.376154  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:39.377025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	I1119 23:05:40.875796  140883 node_ready.go:38] duration metric: took 6m0.000466447s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:05:40.877762  140883 out.go:203] 
	W1119 23:05:40.878901  140883 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1119 23:05:40.878921  140883 out.go:285] * 
	W1119 23:05:40.880854  140883 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 23:05:40.882387  140883 out.go:203] 
	
	
	==> CRI-O <==
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.157720196Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-487903,Uid:f2660d05a38d7f409ed63a1278c85d94,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593159468036899,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{kubernetes.io/config.hash: f2660d05a38d7f409ed63a1278c85d94,kubernetes.io/config.seen: 2025-11-19T22:59:17.014240925Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f3327f08-6c2f-44e6-a960-e799466bbcdd name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.159229301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a29aee94-01a3-4dbd-b133-26f6855ae97a name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.161250806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a29aee94-01a3-4dbd-b133-26f6855ae97a name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.161368128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a29aee94-01a3-4dbd-b133-26f6855ae97a name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.169278836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54fb9311-adb4-4bcc-ae96-9c4597ca9626 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.169465637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54fb9311-adb4-4bcc-ae96-9c4597ca9626 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.170784969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ded3fc1-679e-43e8-bd73-b94320e61d16 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.171696130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593543171674689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ded3fc1-679e-43e8-bd73-b94320e61d16 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.172463632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=332a4477-7ac2-4f33-8d01-087761e30f90 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.172588463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=332a4477-7ac2-4f33-8d01-087761e30f90 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.172753234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=332a4477-7ac2-4f33-8d01-087761e30f90 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.208588323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=499f06c7-ceb9-46f4-861b-591493f4728f name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.208688222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=499f06c7-ceb9-46f4-861b-591493f4728f name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.209942000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25e6c416-14fd-4dda-bc59-988c20395f64 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.210472954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593543210441227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25e6c416-14fd-4dda-bc59-988c20395f64 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.211003195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6c7c223-f709-483b-b00b-eb8d89cccaaf name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.211262913Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6c7c223-f709-483b-b00b-eb8d89cccaaf name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.211512977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6c7c223-f709-483b-b00b-eb8d89cccaaf name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.248913880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5b1158e-452f-4bc6-b67a-e6ad6200f3f4 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.249042965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5b1158e-452f-4bc6-b67a-e6ad6200f3f4 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.251058761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee930f33-7cf9-44e4-9119-ce8cbb456dfa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.251588103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593543251556724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee930f33-7cf9-44e4-9119-ce8cbb456dfa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.252077625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e82e0751-96f0-4ca3-a8c4-9d713c638467 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.252189326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e82e0751-96f0-4ca3-a8c4-9d713c638467 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:43 ha-487903 crio[1097]: time="2025-11-19 23:05:43.252244614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e82e0751-96f0-4ca3-a8c4-9d713c638467 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	6b7e08202a351       ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178   6 minutes ago       Running             kube-vip            0                   4ac5d7afe9e29       kube-vip-ha-487903
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1119 23:05:43.393797    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:43.394082    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:43.395512    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:43.395834    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:43.397277    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov19 22:58] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 22:59] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002202] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.908092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.107363] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.029379] kauditd_printk_skb: 142 callbacks suppressed
	
	
	==> kernel <==
	 23:05:43 up 6 min,  0 users,  load average: 0.00, 0.06, 0.04
	Linux ha-487903 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.228064    1241 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.39.15:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.15:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-487903.18798aa1e7444f8c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-487903,UID:ha-487903,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-487903 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-487903,},FirstTimestamp:2025-11-19 22:59:17.066641292 +0000 UTC m=+0.146753141,LastTimestamp:2025-11-19 22:59:17.066641292 +0000 UTC m=+0.146753141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-487903,}"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.255729    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.356763    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.457782    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.559463    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.661198    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.763389    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.864696    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:41 ha-487903 kubelet[1241]: E1119 23:05:41.966253    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.058250    1241 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-487903\" not found" node="ha-487903"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.067022    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.168709    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.269949    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.371934    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.473289    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.573815    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.675244    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.776288    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.877583    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.978249    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.079483    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.181423    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.282813    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.309587    1241 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.39.15:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.15:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.384455    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903: exit status 2 (206.786603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-487903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.56s)
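
Editor's note: the minikube logs above show the wait pattern behind this GUEST_START failure. node_ready.go polls the Ready condition of "ha-487903-m02" roughly every 2–2.5 seconds against the control-plane endpoint (192.168.39.15:8443), every attempt fails with "connection refused", and after the 6-minute deadline the start aborts with "WaitNodeCondition: context deadline exceeded". The following is a minimal sketch of that polling pattern, not minikube's actual implementation; it assumes client-go, a kubeconfig at the default path, and takes the node name from the log.

// nodewait_sketch.go: poll a node's Ready condition every ~2s until a
// 6-minute deadline, tolerating transient API-server errors such as the
// "connection refused" seen in the log above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	// Overall deadline mirrors the 6m0s wait reported in the log.
	ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
	defer cancel()
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			// Transient errors (e.g. connection refused) are logged and retried.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-487903-m02"); err != nil {
		fmt.Println(err)
	}
}

In this run the API server on 192.168.39.15:8443 never came back, so every poll hit the error branch until the context expired.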

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-487903" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-487903\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-487903\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\
":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-487903\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.15\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.191\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.160\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.187\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"
kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\"
:[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903: exit status 2 (203.217187ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m02.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ node    │ ha-487903 node stop m02 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:53 UTC │
	│ node    │ ha-487903 node start m02 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:53 UTC │ 19 Nov 25 22:54 UTC │
	│ node    │ ha-487903 node list --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ node    │ ha-487903 node list --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ node    │ ha-487903 node delete m03 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:58:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:58:56.213053  140883 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:58:56.213329  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213337  140883 out.go:374] Setting ErrFile to fd 2...
	I1119 22:58:56.213342  140883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:58:56.213519  140883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 22:58:56.213975  140883 out.go:368] Setting JSON to false
	I1119 22:58:56.214867  140883 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16883,"bootTime":1763576253,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:58:56.215026  140883 start.go:143] virtualization: kvm guest
	I1119 22:58:56.217423  140883 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:58:56.219002  140883 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:58:56.219026  140883 notify.go:221] Checking for updates...
	I1119 22:58:56.221890  140883 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:58:56.223132  140883 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:58:56.224328  140883 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 22:58:56.225456  140883 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:58:56.226526  140883 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:58:56.228080  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:58:56.228220  140883 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:58:56.264170  140883 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 22:58:56.265437  140883 start.go:309] selected driver: kvm2
	I1119 22:58:56.265462  140883 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false d
efault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.265642  140883 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:58:56.266633  140883 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:58:56.266714  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:58:56.266798  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:58:56.266898  140883 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:58:56.267071  140883 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:58:56.269538  140883 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 22:58:56.270926  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:58:56.270958  140883 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:58:56.270984  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:58:56.271073  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:58:56.271085  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:58:56.271229  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:58:56.271448  140883 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:58:56.271493  140883 start.go:364] duration metric: took 26.421µs to acquireMachinesLock for "ha-487903"
	I1119 22:58:56.271509  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:58:56.271522  140883 fix.go:54] fixHost starting: 
	I1119 22:58:56.273404  140883 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 22:58:56.273427  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:58:56.275031  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 22:58:56.275084  140883 main.go:143] libmachine: starting domain...
	I1119 22:58:56.275096  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:58:56.275845  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:58:56.276258  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:58:56.276731  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:58:56.277856  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 22:58:57.532843  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:58:57.534321  140883 main.go:143] libmachine: domain is now running
	I1119 22:58:57.534360  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:58:57.535171  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535745  140883 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.535758  140883 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 22:58:57.535763  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:58:57.536231  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.536255  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 22:58:57.536263  140883 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 22:58:57.536269  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:58:57.536284  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:58:57.538607  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.538989  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:58:57.539013  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:58:57.539174  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:58:57.539442  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:58:57.539453  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:00.588204  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:06.668207  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 22:59:09.789580  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:09.792830  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793316  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.793339  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.793640  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:09.793859  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:09.796160  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796551  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.796574  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.796736  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.796945  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.796957  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:09.920535  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:09.920579  140883 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 22:59:09.924026  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924613  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:09.924652  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:09.924920  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:09.925162  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:09.925179  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 22:59:10.075390  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 22:59:10.078652  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079199  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.079233  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.079435  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.079647  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.079675  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:10.221997  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:10.222032  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:10.222082  140883 buildroot.go:174] setting up certificates
	I1119 22:59:10.222102  140883 provision.go:84] configureAuth start
	I1119 22:59:10.225146  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.225685  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.225711  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228217  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228605  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.228627  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.228759  140883 provision.go:143] copyHostCerts
	I1119 22:59:10.228794  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228835  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:10.228849  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:10.228933  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:10.229026  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229051  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:10.229057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:10.229096  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:10.229160  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229185  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:10.229189  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:10.229230  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:10.229308  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
	I1119 22:59:10.335910  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:10.335996  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:10.338770  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339269  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.339307  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.339538  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.439975  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:10.440060  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1119 22:59:10.477861  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:10.477964  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:59:10.529406  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:10.529472  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:10.570048  140883 provision.go:87] duration metric: took 347.930624ms to configureAuth
	I1119 22:59:10.570076  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:10.570440  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:10.573510  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.573997  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.574034  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.574235  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:10.574507  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:10.574526  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:10.838912  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:10.838950  140883 machine.go:97] duration metric: took 1.045075254s to provisionDockerMachine
	I1119 22:59:10.838968  140883 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 22:59:10.838983  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:10.839099  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:10.842141  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842656  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.842700  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.842857  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:10.941042  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:10.946128  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:10.946154  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:10.946218  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:10.946302  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:10.946321  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:10.946415  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:10.958665  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:10.989852  140883 start.go:296] duration metric: took 150.865435ms for postStartSetup
	I1119 22:59:10.989981  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:10.992672  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993117  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:10.993143  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:10.993318  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.080904  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:11.080983  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:11.124462  140883 fix.go:56] duration metric: took 14.852929829s for fixHost
	I1119 22:59:11.127772  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128299  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.128336  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.128547  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:11.128846  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 22:59:11.128865  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:11.255539  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593151.225105539
	
	I1119 22:59:11.255568  140883 fix.go:216] guest clock: 1763593151.225105539
	I1119 22:59:11.255578  140883 fix.go:229] Guest: 2025-11-19 22:59:11.225105539 +0000 UTC Remote: 2025-11-19 22:59:11.124499316 +0000 UTC m=+14.964187528 (delta=100.606223ms)
	I1119 22:59:11.255598  140883 fix.go:200] guest clock delta is within tolerance: 100.606223ms
	I1119 22:59:11.255604  140883 start.go:83] releasing machines lock for "ha-487903", held for 14.984100369s
	I1119 22:59:11.258588  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259028  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.259061  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.259648  140883 ssh_runner.go:195] Run: cat /version.json
	I1119 22:59:11.259725  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:11.262795  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263203  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263243  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263270  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.263465  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.263776  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:11.263809  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:11.264018  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:11.345959  140883 ssh_runner.go:195] Run: systemctl --version
	I1119 22:59:11.373994  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:11.522527  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:11.531055  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:11.531143  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:11.555635  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:11.555666  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:11.555762  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:11.592696  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:11.617501  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:11.617572  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:11.636732  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:11.654496  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:11.811000  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:12.032082  140883 docker.go:234] disabling docker service ...
	I1119 22:59:12.032160  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:12.048543  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:12.064141  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:12.225964  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:12.368239  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:12.384716  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:12.408056  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:12.408120  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.421146  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:12.421223  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.434510  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.447609  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.460732  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:12.477217  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.489987  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.511524  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:12.524517  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:12.536463  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:12.536536  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:12.563021  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:12.578130  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:12.729736  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:59:12.855038  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:12.855107  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:12.860898  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:12.860954  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:12.865294  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:12.912486  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 22:59:12.912590  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.943910  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:12.976663  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:12.980411  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.980805  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:12.980827  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:12.981017  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:12.986162  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:13.003058  140883 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:59:13.003281  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:13.003338  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:13.047712  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:59:13.047780  140883 ssh_runner.go:195] Run: which lz4
	I1119 22:59:13.052977  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1119 22:59:13.053081  140883 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 22:59:13.058671  140883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1119 22:59:13.058708  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1119 22:59:14.697348  140883 crio.go:462] duration metric: took 1.644299269s to copy over tarball
	I1119 22:59:14.697449  140883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 22:59:16.447188  140883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.749702497s)
	I1119 22:59:16.447222  140883 crio.go:469] duration metric: took 1.749848336s to extract the tarball
	I1119 22:59:16.447231  140883 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1119 22:59:16.489289  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:16.536108  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:16.536132  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:16.536140  140883 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 22:59:16.536265  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:16.536328  140883 ssh_runner.go:195] Run: crio config
	I1119 22:59:16.585135  140883 cni.go:84] Creating CNI manager for ""
	I1119 22:59:16.585158  140883 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 22:59:16.585181  140883 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:59:16.585202  140883 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:59:16.585355  140883 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
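The generated kubeadm config above is four YAML documents in a single file, separated by "---": an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration. A small self-contained Go sketch that just lists the kind of each document; the inline sample stands in for the real file and this is not how minikube parses it:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func main() {
    	sample := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
    	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	for i, doc := range strings.Split(sample, "\n---\n") {
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			fmt.Printf("document %d: %s\n", i+1, m[1])
    		}
    	}
    }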
	
	I1119 22:59:16.585375  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:16.585419  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:16.615712  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:16.615824  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 22:59:16.615913  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:16.633015  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:16.633116  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 22:59:16.646138  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 22:59:16.668865  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:16.691000  140883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 22:59:16.713854  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:16.736483  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:16.741324  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:16.757055  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:16.900472  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:16.922953  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 22:59:16.922982  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:16.922999  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:16.923147  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:16.923233  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:16.923245  140883 certs.go:257] generating profile certs ...
	I1119 22:59:16.923340  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:16.923369  140883 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 22:59:16.923388  140883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.15 192.168.39.191 192.168.39.160 192.168.39.254]
	I1119 22:59:17.222295  140883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 ...
	I1119 22:59:17.222330  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30: {Name:mk1efb8fb5e10ff1c6bc1bceec2ebc4b1a4cdce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222507  140883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 ...
	I1119 22:59:17.222521  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30: {Name:mk99b1381f2cff273ee01fc482a9705b00bd6fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.222598  140883 certs.go:382] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt
	I1119 22:59:17.224167  140883 certs.go:386] copying /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30 -> /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key
	I1119 22:59:17.227659  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:17.227687  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:17.227700  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:17.227711  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:17.227725  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:17.227746  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:17.227763  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:17.227778  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:17.227791  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:17.227853  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:17.227922  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:17.227938  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:17.227968  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:17.228003  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:17.228035  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:17.228085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:17.228122  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.228146  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.228164  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
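For context on the SAN list logged by crypto.go:68 above: the apiserver certificate is issued by the cluster CA and carries every IP the API server answers on, including the in-cluster service IP 10.96.0.1 and the HA virtual IP 192.168.39.254. A self-contained Go sketch of issuing such a certificate with IP SANs via crypto/x509, illustrative only and not minikube's implementation; the throwaway CA, key type, lifetimes and the trimmed IP list are assumptions for brevity:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// minikube loads the CA pair from ca.key/ca.crt; a throwaway CA is
    	// generated here so the sketch is self-contained.
    	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// The leaf lists the IPs the API server is reachable on, including
    	// the HA virtual IP (compare the SAN list in the log above).
    	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("192.168.39.15"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }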
	I1119 22:59:17.228751  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:17.267457  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:17.301057  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:17.334334  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:17.369081  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:17.401525  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:17.435168  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:17.468258  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:17.501844  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:17.535729  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:17.568773  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:17.602167  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:59:17.625050  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:17.631971  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:17.646313  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652083  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.652141  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:17.660153  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:17.675854  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:17.691421  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697623  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.697704  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:17.706162  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:17.721477  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:17.736953  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743111  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.743185  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:17.751321  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:59:17.766690  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:17.773200  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:17.781700  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:17.790000  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:17.798411  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:17.807029  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:17.815374  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
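The six "openssl x509 -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds), presumably so that soon-to-expire certificates can be flagged before the cluster is restarted. An equivalent check in Go using crypto/x509; this is a sketch, and the path used in main is just one of the certificates checked above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>` from the log above.
    func checkend(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }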
	I1119 22:59:17.823385  140883 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clust
erName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:59:17.823559  140883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:59:17.823640  140883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:59:17.865475  140883 cri.go:89] found id: ""
	I1119 22:59:17.865542  140883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:59:17.879260  140883 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:59:17.879283  140883 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:59:17.879329  140883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:59:17.892209  140883 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:59:17.892673  140883 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.892815  140883 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 22:59:17.893101  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.963155  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 22:59:17.963616  140883 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 22:59:17.963633  140883 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 22:59:17.963638  140883 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 22:59:17.963643  140883 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 22:59:17.963647  140883 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 22:59:17.963661  140883 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 22:59:17.964167  140883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:59:17.978087  140883 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 22:59:17.978113  140883 kubeadm.go:602] duration metric: took 98.825282ms to restartPrimaryControlPlane
	I1119 22:59:17.978123  140883 kubeadm.go:403] duration metric: took 154.749827ms to StartCluster
	I1119 22:59:17.978140  140883 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:17.978206  140883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 22:59:17.978813  140883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:18.091922  140883 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:18.091962  140883 start.go:242] waiting for startup goroutines ...
	I1119 22:59:18.091978  140883 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:59:18.092251  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.092350  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:18.092431  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 22:59:18.092446  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 107.182µs
	I1119 22:59:18.092458  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:18.092470  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:18.092656  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:18.094431  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:18.096799  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097214  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:18.097237  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:18.097408  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:18.112697  140883 out.go:179] * Enabled addons: 
	I1119 22:59:18.214740  140883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/pause:3.1". assuming images are not preloaded.
	I1119 22:59:18.214767  140883 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/pause:3.1]
	I1119 22:59:18.214826  140883 image.go:138] retrieving image: registry.k8s.io/pause:3.1
	I1119 22:59:18.216192  140883 image.go:181] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1119 22:59:18.232424  140883 addons.go:515] duration metric: took 140.445828ms for enable addons: enabled=[]
	I1119 22:59:18.373430  140883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1119 22:59:18.424783  140883 cache_images.go:118] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1119 22:59:18.424834  140883 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1119 22:59:18.424902  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:18.429834  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.468042  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.506995  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 22:59:18.543506  140883 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1119 22:59:18.543547  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 -> /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.543609  140883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549496  140883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1119 22:59:18.549518  140883 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.1
	I1119 22:59:18.549580  140883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1119 22:59:19.599140  140883 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.049525714s)
	I1119 22:59:19.599189  140883 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1119 22:59:19.599233  140883 cache_images.go:125] Successfully loaded all cached images
	I1119 22:59:19.599247  140883 cache_images.go:94] duration metric: took 1.384470883s to LoadCachedImages
	I1119 22:59:19.604407  140883 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 22:59:19.604463  140883 start.go:247] waiting for cluster config update ...
	I1119 22:59:19.604477  140883 start.go:256] writing updated cluster config ...
	I1119 22:59:19.606572  140883 out.go:203] 
	I1119 22:59:19.608121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:19.608254  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.609863  140883 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 22:59:19.611047  140883 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:59:19.611074  140883 cache.go:65] Caching tarball of preloaded images
	I1119 22:59:19.611204  140883 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:59:19.611221  140883 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:59:19.611355  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:19.611616  140883 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 22:59:19.611685  140883 start.go:364] duration metric: took 40.215µs to acquireMachinesLock for "ha-487903-m02"
	I1119 22:59:19.611709  140883 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:59:19.611717  140883 fix.go:54] fixHost starting: m02
	I1119 22:59:19.613410  140883 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 22:59:19.613431  140883 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:59:19.615098  140883 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 22:59:19.615142  140883 main.go:143] libmachine: starting domain...
	I1119 22:59:19.615165  140883 main.go:143] libmachine: ensuring networks are active...
	I1119 22:59:19.616007  140883 main.go:143] libmachine: Ensuring network default is active
	I1119 22:59:19.616405  140883 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 22:59:19.616894  140883 main.go:143] libmachine: getting domain XML...
	I1119 22:59:19.618210  140883 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 22:59:20.934173  140883 main.go:143] libmachine: waiting for domain to start...
	I1119 22:59:20.935608  140883 main.go:143] libmachine: domain is now running
	I1119 22:59:20.935632  140883 main.go:143] libmachine: waiting for IP...
	I1119 22:59:20.936409  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936918  140883 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.936934  140883 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 22:59:20.936940  140883 main.go:143] libmachine: reserving static IP address...
	I1119 22:59:20.937407  140883 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.937433  140883 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 22:59:20.937445  140883 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 22:59:20.937450  140883 main.go:143] libmachine: waiting for SSH...
	I1119 22:59:20.937455  140883 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 22:59:20.939837  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940340  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:54:10 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:20.940366  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:20.940532  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:20.940720  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:20.940730  140883 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 22:59:24.012154  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:30.092142  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 22:59:33.096094  140883 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 22:59:36.206633  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
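The dial errors above (first "no route to host", then "connection refused") are the normal shape of waiting for a VM to come back: the address is unreachable until the guest brings its network up, the port is refused until sshd starts, and then the "exit 0" probe succeeds. A minimal Go sketch of that kind of wait loop, illustrative only; the address comes from this run, while the timeout and poll interval are arbitrary choices for the example:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForPort polls addr until a TCP connection succeeds or the deadline
    // passes, the same shape as the "waiting for SSH" loop in the log above.
    func waitForPort(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s: %w", addr, err)
    		}
    		time.Sleep(3 * time.Second)
    	}
    }

    func main() {
    	if err := waitForPort("192.168.39.191:22", 2*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("ssh port is reachable")
    }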
	I1119 22:59:36.209899  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210391  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.210411  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.210732  140883 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 22:59:36.211013  140883 machine.go:94] provisionDockerMachine start ...
	I1119 22:59:36.213527  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.213977  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.214002  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.214194  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.214405  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.214418  140883 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:59:36.326740  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 22:59:36.326781  140883 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 22:59:36.329446  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.329910  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.329941  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.330096  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.330305  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.330321  140883 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 22:59:36.457310  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 22:59:36.460161  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460619  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.460649  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.460898  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:36.461143  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:36.461164  140883 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:59:36.581648  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:59:36.581677  140883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 22:59:36.581693  140883 buildroot.go:174] setting up certificates
	I1119 22:59:36.581705  140883 provision.go:84] configureAuth start
	I1119 22:59:36.585049  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.585711  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.585755  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588067  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588494  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.588521  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.588646  140883 provision.go:143] copyHostCerts
	I1119 22:59:36.588674  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588706  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 22:59:36.588714  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 22:59:36.588769  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 22:59:36.588842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588860  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 22:59:36.588866  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 22:59:36.588903  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 22:59:36.589025  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589050  140883 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 22:59:36.589057  140883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 22:59:36.589079  140883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 22:59:36.589147  140883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
	I1119 22:59:36.826031  140883 provision.go:177] copyRemoteCerts
	I1119 22:59:36.826092  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:59:36.828610  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829058  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:36.829082  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:36.829236  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:36.914853  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 22:59:36.914951  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 22:59:36.947443  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 22:59:36.947526  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 22:59:36.979006  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 22:59:36.979097  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:59:37.010933  140883 provision.go:87] duration metric: took 429.212672ms to configureAuth
	I1119 22:59:37.010966  140883 buildroot.go:189] setting minikube options for container-runtime
	I1119 22:59:37.011249  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:37.014321  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.014846  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.014890  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.015134  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.015408  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.015434  140883 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:59:37.258599  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:59:37.258634  140883 machine.go:97] duration metric: took 1.047602081s to provisionDockerMachine
	I1119 22:59:37.258649  140883 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 22:59:37.258662  140883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:59:37.258718  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:59:37.261730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262218  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.262247  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.262427  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.348416  140883 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:59:37.353564  140883 info.go:137] Remote host: Buildroot 2025.02
	I1119 22:59:37.353602  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 22:59:37.353676  140883 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 22:59:37.353750  140883 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 22:59:37.353760  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 22:59:37.353845  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:59:37.366805  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:37.398923  140883 start.go:296] duration metric: took 140.253592ms for postStartSetup
	I1119 22:59:37.399023  140883 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 22:59:37.401945  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402392  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.402417  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.402579  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.493849  140883 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 22:59:37.493957  140883 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 22:59:37.556929  140883 fix.go:56] duration metric: took 17.945204618s for fixHost
	I1119 22:59:37.560155  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560693  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.560730  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.560998  140883 main.go:143] libmachine: Using SSH client type: native
	I1119 22:59:37.561206  140883 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 22:59:37.561217  140883 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 22:59:37.681336  140883 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593177.645225217
	
	I1119 22:59:37.681361  140883 fix.go:216] guest clock: 1763593177.645225217
	I1119 22:59:37.681369  140883 fix.go:229] Guest: 2025-11-19 22:59:37.645225217 +0000 UTC Remote: 2025-11-19 22:59:37.55695737 +0000 UTC m=+41.396645577 (delta=88.267847ms)
	I1119 22:59:37.681385  140883 fix.go:200] guest clock delta is within tolerance: 88.267847ms
	I1119 22:59:37.681391  140883 start.go:83] releasing machines lock for "ha-487903-m02", held for 18.069691628s
	I1119 22:59:37.684191  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.684592  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.684617  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.686578  140883 out.go:179] * Found network options:
	I1119 22:59:37.687756  140883 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 22:59:37.688974  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 22:59:37.689312  140883 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 22:59:37.689391  140883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:59:37.689429  140883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:59:37.692557  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.692674  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693088  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693119  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693175  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:37.693199  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:37.693322  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.693519  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 22:59:37.917934  140883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:59:37.925533  140883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:59:37.925625  140883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:59:37.946707  140883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:59:37.946736  140883 start.go:496] detecting cgroup driver to use...
	I1119 22:59:37.946815  140883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:59:37.972689  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:59:37.990963  140883 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:59:37.991033  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:59:38.008725  140883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:59:38.025289  140883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:59:38.180260  140883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:59:38.396498  140883 docker.go:234] disabling docker service ...
	I1119 22:59:38.396561  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:59:38.413974  140883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:59:38.429993  140883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:59:38.600828  140883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:59:38.743771  140883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:59:38.761006  140883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:59:38.784784  140883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:59:38.784849  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.797617  140883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 22:59:38.797682  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.810823  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.824064  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.837310  140883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:59:38.851169  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.864106  140883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.884838  140883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:59:38.897998  140883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:59:38.909976  140883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 22:59:38.910055  140883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 22:59:38.933644  140883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:59:38.947853  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:39.102667  140883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:59:39.237328  140883 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:59:39.237425  140883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:59:39.244051  140883 start.go:564] Will wait 60s for crictl version
	I1119 22:59:39.244122  140883 ssh_runner.go:195] Run: which crictl
	I1119 22:59:39.248522  140883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 22:59:39.290126  140883 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 22:59:39.290249  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.321443  140883 ssh_runner.go:195] Run: crio --version
	I1119 22:59:39.354869  140883 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 22:59:39.356230  140883 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 22:59:39.359840  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360302  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:39.360329  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:39.360492  140883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 22:59:39.365473  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:39.382234  140883 mustload.go:66] Loading cluster: ha-487903
	I1119 22:59:39.382499  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:39.384228  140883 host.go:66] Checking if "ha-487903" exists ...
	I1119 22:59:39.384424  140883 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 22:59:39.384434  140883 certs.go:195] generating shared ca certs ...
	I1119 22:59:39.384461  140883 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:59:39.384590  140883 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 22:59:39.384635  140883 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 22:59:39.384645  140883 certs.go:257] generating profile certs ...
	I1119 22:59:39.384719  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 22:59:39.384773  140883 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 22:59:39.384805  140883 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 22:59:39.384819  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 22:59:39.384832  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 22:59:39.384842  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 22:59:39.384852  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 22:59:39.384862  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 22:59:39.384884  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 22:59:39.384898  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 22:59:39.384910  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 22:59:39.384960  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 22:59:39.384991  140883 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 22:59:39.385000  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 22:59:39.385020  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 22:59:39.385051  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:59:39.385085  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 22:59:39.385135  140883 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 22:59:39.385162  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 22:59:39.385175  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 22:59:39.385187  140883 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:39.387504  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.387909  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:39.387931  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:39.388082  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:39.466324  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 22:59:39.472238  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 22:59:39.488702  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 22:59:39.494809  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 22:59:39.508912  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 22:59:39.514176  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 22:59:39.528869  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 22:59:39.534452  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 22:59:39.547082  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 22:59:39.552126  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 22:59:39.565359  140883 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 22:59:39.570444  140883 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 22:59:39.583069  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:59:39.616063  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 22:59:39.649495  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:59:39.681835  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:59:39.717138  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:59:39.749821  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:59:39.782202  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:59:39.813910  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 22:59:39.846088  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 22:59:39.879164  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 22:59:39.911376  140883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:59:39.944000  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 22:59:39.967306  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 22:59:39.990350  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 22:59:40.013301  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 22:59:40.037246  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 22:59:40.060715  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 22:59:40.086564  140883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 22:59:40.109633  140883 ssh_runner.go:195] Run: openssl version
	I1119 22:59:40.116643  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 22:59:40.130925  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136232  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.136304  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 22:59:40.144024  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:59:40.158532  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:59:40.172833  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178370  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.178436  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:59:40.186337  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:59:40.203312  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 22:59:40.218899  140883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224542  140883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.224604  140883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 22:59:40.232341  140883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 22:59:40.246745  140883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:59:40.252498  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:59:40.260177  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:59:40.267967  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:59:40.275670  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:59:40.282924  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:59:40.290564  140883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:59:40.297918  140883 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 22:59:40.298017  140883 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:59:40.298041  140883 kube-vip.go:115] generating kube-vip config ...
	I1119 22:59:40.298079  140883 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 22:59:40.326946  140883 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 22:59:40.327021  140883 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 22:59:40.327086  140883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:59:40.341513  140883 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:59:40.341602  140883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 22:59:40.355326  140883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 22:59:40.377667  140883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:59:40.398672  140883 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 22:59:40.420213  140883 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 22:59:40.424583  140883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:59:40.440499  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.591016  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.625379  140883 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:59:40.625713  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.625790  140883 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:59:40.625904  140883 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
I1119 22:59:40.625917  140883 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 136.601µs
	I1119 22:59:40.625925  140883 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 22:59:40.625932  140883 cache.go:87] Successfully saved all images to host disk.
	I1119 22:59:40.626121  140883 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:59:40.628127  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.628799  140883 out.go:179] * Verifying Kubernetes components...
	I1119 22:59:40.630163  140883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:59:40.631018  140883 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631564  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:59:40.631591  140883 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:59:40.631793  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:59:40.839364  140883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:59:40.839885  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:40.839904  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:40.842028  140883 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:59:40.844857  140883 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845326  140883 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 22:59:40.845355  140883 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 22:59:40.845505  140883 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
I1119 22:59:40.874910  140883 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 22:59:40.875019  140883 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 22:59:40.875298  140883 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 22:59:41.009083  140883 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:59:41.009116  140883 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:59:41.012633  140883 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
	W1119 22:59:42.877100  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:45.376962  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:47.876896  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:50.376621  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:52.876228  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:54.876713  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:57.376562  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 22:59:59.377033  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:01.876165  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:03.876776  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:06.376476  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:08.377072  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:10.876811  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:13.377002  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:15.876574  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:18.376735  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:20.876598  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:23.376586  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:25.876567  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:28.376801  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:30.876662  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:32.877033  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:35.376969  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:37.876252  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:39.876923  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:42.376913  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:44.876958  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:47.376642  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:49.376843  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:51.876453  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:54.376840  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:56.876275  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:00:58.876988  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:01.376794  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:03.876458  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:05.876958  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:08.377013  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:10.876929  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:13.376449  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:15.376525  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:17.376595  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:19.876563  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:22.376426  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:24.376713  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:26.376833  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:28.876508  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:30.876839  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:32.877099  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:35.377181  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:37.876444  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:39.877103  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:42.376270  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:44.376706  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:46.876968  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:49.376048  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:51.376320  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:53.376410  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:55.376492  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:57.876667  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:01:59.876765  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:02.376272  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:04.376314  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:06.376754  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:08.877042  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:11.376455  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:13.376516  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:15.376798  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:17.377028  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:19.876336  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:22.376568  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:24.876573  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:27.376833  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:29.876498  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:31.876574  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:34.376621  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:36.376848  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:38.877032  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:41.376174  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:43.377081  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:45.876182  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:48.376695  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:50.876795  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:52.876945  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:55.377185  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:57.876221  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:02:59.876403  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:01.876497  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:04.376656  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:06.376862  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:08.876459  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:11.376839  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:13.877025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:16.376692  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:18.377137  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:20.876509  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:22.876756  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:25.377113  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:27.876224  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:29.876784  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:31.877072  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:34.376176  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:36.876300  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:38.876430  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:41.376746  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:43.876333  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:46.376700  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:48.376946  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:50.376983  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:52.876447  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:55.376396  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:03:57.876511  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:00.376395  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:02.376931  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:04.876111  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:06.876594  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:09.376360  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:11.377039  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:13.876739  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:15.876954  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:18.376383  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:20.376729  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:22.877074  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:25.376970  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:27.377128  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:29.876794  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:32.376301  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:34.876592  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:37.376959  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:39.876221  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:41.876361  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:43.876530  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:45.877025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:48.376319  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:50.376838  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:52.376995  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:54.876470  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:56.876815  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:04:59.376319  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:01.376715  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:03.376967  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:05.876573  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:07.877236  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:10.376866  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:12.876650  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:15.376722  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:17.876558  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:20.376338  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:22.876342  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:25.376231  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:27.876076  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:30.376254  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:32.376759  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:34.876778  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:37.376154  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	W1119 23:05:39.377025  140883 node_ready.go:55] error getting node "ha-487903-m02" condition "Ready" status (will retry): Get "https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02": dial tcp 192.168.39.15:8443: connect: connection refused
	I1119 23:05:40.875796  140883 node_ready.go:38] duration metric: took 6m0.000466447s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:05:40.877762  140883 out.go:203] 
	W1119 23:05:40.878901  140883 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1119 23:05:40.878921  140883 out.go:285] * 
	W1119 23:05:40.880854  140883 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 23:05:40.882387  140883 out.go:203] 
	
	
	==> CRI-O <==
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.630744964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593544630716792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=742a24bb-03db-4b48-87c3-8562ebde09d4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.631291255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fe06379-674b-43ff-afaf-0809d8eb6ee4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.631372778Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fe06379-674b-43ff-afaf-0809d8eb6ee4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.631425650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fe06379-674b-43ff-afaf-0809d8eb6ee4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.668950631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=671bc327-258a-4113-a2ac-ac9bf116999e name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.669021406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=671bc327-258a-4113-a2ac-ac9bf116999e name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.670363149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13be969a-6de3-45c2-8ae8-3381f8fadf48 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.670929372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593544670903715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13be969a-6de3-45c2-8ae8-3381f8fadf48 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.671481511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaf8f7ae-eb4c-415f-a972-8e984a89bbac name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.671531807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaf8f7ae-eb4c-415f-a972-8e984a89bbac name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.671620944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaf8f7ae-eb4c-415f-a972-8e984a89bbac name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.709343012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=888b234a-0e8b-4e6f-b9c1-e6727460ccf3 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.709433991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=888b234a-0e8b-4e6f-b9c1-e6727460ccf3 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.711337476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad08d136-b4b8-4c14-8b46-99be84914bbd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.711836480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593544711813858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad08d136-b4b8-4c14-8b46-99be84914bbd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.712486756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8250a9f0-5427-4118-9961-351aa865635c name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.712562429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8250a9f0-5427-4118-9961-351aa865635c name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.712622977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8250a9f0-5427-4118-9961-351aa865635c name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.756466459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3aedf6cb-4ed4-43c9-81bb-9ddfa352e658 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.756673494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3aedf6cb-4ed4-43c9-81bb-9ddfa352e658 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.758723727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3e5842d-6d13-49c2-ad8a-ce144bbfb6d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.759275400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593544759246722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147868,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3e5842d-6d13-49c2-ad8a-ce144bbfb6d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.760228649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a604be4-0cbc-420b-9be8-3e9bc478512f name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.760389211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a604be4-0cbc-420b-9be8-3e9bc478512f name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:05:44 ha-487903 crio[1097]: time="2025-11-19 23:05:44.760601032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b7e08202a351cd56ba9649d7ab5ca6f63a8642b11fbaa795510df9b62430ab9,PodSandboxId:4ac5d7afe9e29dd583edf8963dd85b20c4a56a64ecff695db46f457ad0225861,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593161042894056,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 0,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a604be4-0cbc-420b-9be8-3e9bc478512f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	6b7e08202a351       ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178   6 minutes ago       Running             kube-vip            0                   4ac5d7afe9e29       kube-vip-ha-487903
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1119 23:05:44.912271    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:44.912533    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:44.914099    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:44.914604    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E1119 23:05:44.916088    1970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov19 22:58] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 22:59] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002202] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.908092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.107363] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.029379] kauditd_printk_skb: 142 callbacks suppressed
	
	
	==> kernel <==
	 23:05:44 up 6 min,  0 users,  load average: 0.08, 0.07, 0.05
	Linux ha-487903 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kubelet <==
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.573815    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.675244    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.776288    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.877583    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:42 ha-487903 kubelet[1241]: E1119 23:05:42.978249    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.079483    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.181423    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.282813    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.309587    1241 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.39.15:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.15:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.384455    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.485494    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.586565    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.687557    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.788943    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.890798    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:43 ha-487903 kubelet[1241]: E1119 23:05:43.992391    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.092959    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.193987    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.294657    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.396597    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.497270    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.598916    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.700445    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.801648    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	Nov 19 23:05:44 ha-487903 kubelet[1241]: E1119 23:05:44.902869    1241 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.39.15:8443/api/v1/nodes/ha-487903\": dial tcp 192.168.39.15:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903: exit status 2 (205.075905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-487903" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.52s)
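
Note on the failure mode recorded above: the node_ready.go warnings show minikube polling the Ready condition of "ha-487903-m02" roughly every 2-2.5 seconds against https://192.168.39.15:8443 and giving up when the 6m0s wait expires, which produces the GUEST_START exit. The sketch below illustrates that kind of poll with client-go; the interval, timeout, and clientset wiring are assumptions for illustration only, not minikube's actual node_ready.go implementation.

    // Package nodewait is a minimal, hedged sketch of the retry loop visible in
    // the node_ready.go log lines above: poll a node's Ready condition until it
    // is True or the wait deadline expires. It is not minikube's real code.
    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady returns nil once the named node reports Ready=True, or an
    // error when the timeout (6m in the logs above) is exceeded.
    func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // Mirrors the behaviour in the log: warn and retry on errors
                    // such as "connection refused".
                    fmt.Printf("error getting node %q (will retry): %v\n", name, err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }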

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (7.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 stop --alsologtostderr -v 5: (7.263828845s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5: exit status 7 (65.117634ms)

                                                
                                                
-- stdout --
	ha-487903
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487903-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487903-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487903-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:05:52.574428  142712 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:05:52.574694  142712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.574703  142712 out.go:374] Setting ErrFile to fd 2...
	I1119 23:05:52.574707  142712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.574907  142712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:05:52.575085  142712 out.go:368] Setting JSON to false
	I1119 23:05:52.575118  142712 mustload.go:66] Loading cluster: ha-487903
	I1119 23:05:52.575207  142712 notify.go:221] Checking for updates...
	I1119 23:05:52.575544  142712 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:05:52.575562  142712 status.go:174] checking status of ha-487903 ...
	I1119 23:05:52.577841  142712 status.go:371] ha-487903 host status = "Stopped" (err=<nil>)
	I1119 23:05:52.577858  142712 status.go:384] host is not running, skipping remaining checks
	I1119 23:05:52.577863  142712 status.go:176] ha-487903 status: &{Name:ha-487903 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:05:52.577893  142712 status.go:174] checking status of ha-487903-m02 ...
	I1119 23:05:52.579147  142712 status.go:371] ha-487903-m02 host status = "Stopped" (err=<nil>)
	I1119 23:05:52.579161  142712 status.go:384] host is not running, skipping remaining checks
	I1119 23:05:52.579165  142712 status.go:176] ha-487903-m02 status: &{Name:ha-487903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:05:52.579177  142712 status.go:174] checking status of ha-487903-m03 ...
	I1119 23:05:52.580372  142712 status.go:371] ha-487903-m03 host status = "Stopped" (err=<nil>)
	I1119 23:05:52.580387  142712 status.go:384] host is not running, skipping remaining checks
	I1119 23:05:52.580390  142712 status.go:176] ha-487903-m03 status: &{Name:ha-487903-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:05:52.580401  142712 status.go:174] checking status of ha-487903-m04 ...
	I1119 23:05:52.581601  142712 status.go:371] ha-487903-m04 host status = "Stopped" (err=<nil>)
	I1119 23:05:52.581615  142712 status.go:384] host is not running, skipping remaining checks
	I1119 23:05:52.581619  142712 status.go:176] ha-487903-m04 status: &{Name:ha-487903-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-487903-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903: exit status 7 (64.579068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-487903" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (7.39s)
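
For context, the three assertion messages above (ha_test.go:545, 551, 554) come from the test inspecting the `minikube status` text for expected control-plane, kubelet, and apiserver lines after the stop. A rough sketch of that style of substring counting follows; the helper name and the expected counts are illustrative assumptions rather than ha_test.go's actual logic, and the specific reason this run tripped the checks is not shown here.

    // Package statuscheck sketches the kind of substring counting that the
    // StopCluster assertions above appear to perform on `minikube status`
    // output. The expected counts here are illustrative assumptions only.
    package statuscheck

    import (
        "fmt"
        "strings"
    )

    // CheckStoppedCluster verifies that the status output mentions at least the
    // given number of control-plane nodes, stopped kubelets, and stopped apiservers.
    func CheckStoppedCluster(statusOut string, wantControlPlanes, wantKubelets, wantAPIServers int) error {
        if got := strings.Count(statusOut, "type: Control Plane"); got < wantControlPlanes {
            return fmt.Errorf("expected at least %d control-plane nodes in status, got %d", wantControlPlanes, got)
        }
        if got := strings.Count(statusOut, "kubelet: Stopped"); got < wantKubelets {
            return fmt.Errorf("expected at least %d stopped kubelets in status, got %d", wantKubelets, got)
        }
        if got := strings.Count(statusOut, "apiserver: Stopped"); got < wantAPIServers {
            return fmt.Errorf("expected at least %d stopped apiservers in status, got %d", wantAPIServers, got)
        }
        return nil
    }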

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (120.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m57.002615473s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
ha_test.go:573: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:576: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:579: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:582: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
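A minimal shell sketch of the same Ready-condition check, assuming a kubeconfig pointed at this cluster: the template prints one status line per node, and counting the True lines shows why the assertion failed (four nodes reported Ready=True where the test expected three):

  # One line per node with the status of its Ready condition (template taken from ha_test.go above),
  # then count how many report True.
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}' \
    | grep -c 'True'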
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 logs -n 25: (1.860626199s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ cp │ ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp │ ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ node │ ha-487903 node stop m02 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:53 UTC │
	│ node │ ha-487903 node start m02 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:53 UTC │ 19 Nov 25 22:54 UTC │
	│ node │ ha-487903 node list --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ │
	│ stop │ ha-487903 stop --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:58 UTC │
	│ start │ ha-487903 start --wait true --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │ │
	│ node │ ha-487903 node list --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ │
	│ node │ ha-487903 node delete m03 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ │
	│ stop │ ha-487903 stop --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:05 UTC │
	│ start │ ha-487903 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:07 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:05:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:05:52.706176  142733 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:05:52.706327  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706339  142733 out.go:374] Setting ErrFile to fd 2...
	I1119 23:05:52.706345  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706585  142733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:05:52.707065  142733 out.go:368] Setting JSON to false
	I1119 23:05:52.708054  142733 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17300,"bootTime":1763576253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 23:05:52.708149  142733 start.go:143] virtualization: kvm guest
	I1119 23:05:52.710481  142733 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 23:05:52.712209  142733 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:05:52.712212  142733 notify.go:221] Checking for updates...
	I1119 23:05:52.713784  142733 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:05:52.715651  142733 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:05:52.717169  142733 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 23:05:52.718570  142733 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 23:05:52.719907  142733 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:05:52.721783  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:05:52.722291  142733 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:05:52.757619  142733 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 23:05:52.759046  142733 start.go:309] selected driver: kvm2
	I1119 23:05:52.759059  142733 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:fal
se default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.759205  142733 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:05:52.760143  142733 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:05:52.760174  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:05:52.760222  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:05:52.760262  142733 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.760375  142733 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:05:52.762211  142733 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 23:05:52.763538  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:05:52.763567  142733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 23:05:52.763575  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:05:52.763673  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:05:52.763683  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:05:52.763787  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:05:52.763996  142733 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:05:52.764045  142733 start.go:364] duration metric: took 30.713µs to acquireMachinesLock for "ha-487903"
	I1119 23:05:52.764058  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:05:52.764066  142733 fix.go:54] fixHost starting: 
	I1119 23:05:52.765697  142733 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 23:05:52.765728  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:05:52.767327  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 23:05:52.767364  142733 main.go:143] libmachine: starting domain...
	I1119 23:05:52.767374  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:05:52.768372  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:05:52.768788  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:05:52.769282  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:05:52.770421  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
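The definition above is handled through libvirt, so the same domain can be inspected by hand with virsh (a sketch, assuming the qemu:///system connection this run uses):

  # List all libvirt domains and their run state.
  virsh --connect qemu:///system list --all
  # Dump the live XML for ha-487903; it should match the definition logged above.
  virsh --connect qemu:///system dumpxml ha-487903
  # Show DHCP-assigned interface addresses (the log resolves 192.168.39.15 from the mk-ha-487903 leases).
  virsh --connect qemu:///system domifaddr ha-487903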
	
	I1119 23:05:54.042651  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:05:54.044244  142733 main.go:143] libmachine: domain is now running
	I1119 23:05:54.044267  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:05:54.045198  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045704  142733 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045724  142733 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 23:05:54.045732  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:05:54.046222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.046258  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 23:05:54.046271  142733 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 23:05:54.046295  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:05:54.046303  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:05:54.048860  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049341  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.049374  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049568  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:05:54.049870  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:05:54.049901  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:05:57.100181  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:03.180312  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:06.296535  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:06.299953  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300441  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.300473  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300784  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:06.301022  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:06.303559  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.303988  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.304019  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.304170  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.304355  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.304365  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:06.427246  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:06.427299  142733 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 23:06:06.430382  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.430835  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.430864  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.431166  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.431461  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.431480  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 23:06:06.561698  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 23:06:06.564714  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565207  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.565235  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565469  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.565702  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.565719  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:06.681480  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:06.681508  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:06.681543  142733 buildroot.go:174] setting up certificates
	I1119 23:06:06.681552  142733 provision.go:84] configureAuth start
	I1119 23:06:06.685338  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.685816  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.685842  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.688699  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.689164  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689319  142733 provision.go:143] copyHostCerts
	I1119 23:06:06.689357  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689414  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:06.689445  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689527  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:06.689624  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689643  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:06.689649  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689677  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:06.689736  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689753  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:06.689759  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689781  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:06.689843  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
	I1119 23:06:07.018507  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:07.018578  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:07.021615  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022141  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.022166  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022358  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.124817  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:07.124927  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:07.158158  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:07.158263  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1119 23:06:07.190088  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:07.190169  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:07.222689  142733 provision.go:87] duration metric: took 541.123395ms to configureAuth
	I1119 23:06:07.222718  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:07.222970  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:07.226056  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226580  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.226611  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226826  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.227127  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.227155  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:07.467444  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:07.467474  142733 machine.go:97] duration metric: took 1.166437022s to provisionDockerMachine
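A quick check that the sysconfig drop-in written above landed and that cri-o came back after the restart (a sketch over minikube ssh against this profile):

  # Show the drop-in the provisioning step wrote.
  out/minikube-linux-amd64 -p ha-487903 ssh -- cat /etc/sysconfig/crio.minikube
  # Confirm cri-o is active again after the restart triggered above.
  out/minikube-linux-amd64 -p ha-487903 ssh -- sudo systemctl is-active crio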
	I1119 23:06:07.467487  142733 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 23:06:07.467497  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:07.467573  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:07.470835  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471406  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.471439  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471649  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.557470  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:07.562862  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:07.562927  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:07.563034  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:07.563138  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:07.563154  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:07.563287  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:07.576076  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:07.609515  142733 start.go:296] duration metric: took 142.008328ms for postStartSetup
	I1119 23:06:07.609630  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:07.612430  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.612824  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.612846  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.613026  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.696390  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:07.696457  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:07.760325  142733 fix.go:56] duration metric: took 14.99624586s for fixHost
	I1119 23:06:07.763696  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764319  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.764358  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764614  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.764948  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.764966  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:07.879861  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593567.838594342
	
	I1119 23:06:07.879914  142733 fix.go:216] guest clock: 1763593567.838594342
	I1119 23:06:07.879939  142733 fix.go:229] Guest: 2025-11-19 23:06:07.838594342 +0000 UTC Remote: 2025-11-19 23:06:07.760362222 +0000 UTC m=+15.104606371 (delta=78.23212ms)
	I1119 23:06:07.879965  142733 fix.go:200] guest clock delta is within tolerance: 78.23212ms
	I1119 23:06:07.879974  142733 start.go:83] releasing machines lock for "ha-487903", held for 15.115918319s
	I1119 23:06:07.882904  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883336  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.883370  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883966  142733 ssh_runner.go:195] Run: cat /version.json
	I1119 23:06:07.884051  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:07.887096  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887222  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887583  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887617  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887792  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887817  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887816  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.888042  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:08.000713  142733 ssh_runner.go:195] Run: systemctl --version
	I1119 23:06:08.008530  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:08.160324  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:08.168067  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:08.168152  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:08.191266  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:08.191300  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:08.191379  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:08.213137  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:08.230996  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:08.231095  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:08.249013  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:08.265981  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:08.414758  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:08.622121  142733 docker.go:234] disabling docker service ...
	I1119 23:06:08.622209  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:08.639636  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:08.655102  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:08.816483  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:08.968104  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:08.984576  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:09.008691  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:09.008781  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.022146  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:09.022232  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.035596  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.049670  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.063126  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:09.077541  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.091115  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.112968  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.126168  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:09.137702  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:09.137765  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:09.176751  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
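After the modprobe and the ip_forward write above, the bridge-netfilter state can be spot-checked inside the guest (sketch):

  # The module whose absence made the earlier sysctl probe fail with "cannot stat".
  lsmod | grep br_netfilter
  # Both keys should resolve now; ip_forward was just set to 1.
  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward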
	I1119 23:06:09.191238  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:09.335526  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:09.473011  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:09.473116  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:06:09.479113  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:09.479189  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:09.483647  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:09.528056  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:06:09.528131  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.559995  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.592672  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:09.597124  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597564  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:09.597590  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597778  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:09.602913  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:09.620048  142733 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:06:09.620196  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:09.620243  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.674254  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.674279  142733 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:06:09.674328  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.712016  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.712041  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:09.712058  142733 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 23:06:09.712184  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
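The empty ExecStart= line in the kubelet drop-in above is the standard systemd idiom for clearing any previously defined ExecStart before setting the override. A quick sketch for viewing the merged unit on the node (illustrative, not part of the test run):

	systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart    # effective command line after the override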
	I1119 23:06:09.712274  142733 ssh_runner.go:195] Run: crio config
	I1119 23:06:09.768708  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:06:09.768732  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:06:09.768752  142733 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:06:09.768773  142733 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:06:09.768939  142733 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
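The rendered config above is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp lines further down). As a rough sanity check, kubeadm can validate such a file itself; a sketch assuming kubeadm v1.27+ and the binary path used in this log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new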
	
	I1119 23:06:09.768965  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:09.769018  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:09.795571  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:09.795712  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
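This static pod pins the control-plane VIP 192.168.39.254 on eth0 and enables control-plane load-balancing (lb_enable/lb_port). A minimal check on a control-plane node once kubelet has picked up the manifest (illustrative commands; the VIP is only bound on the current kube-vip leader):

	sudo crictl ps --name kube-vip             # kube-vip container should be Running
	ip addr show eth0 | grep 192.168.39.254    # VIP present on the leader's eth0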
	I1119 23:06:09.795795  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:09.812915  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:09.812990  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 23:06:09.827102  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 23:06:09.850609  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:09.873695  142733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 23:06:09.898415  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:09.921905  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:09.927238  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:09.944650  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:10.092858  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:10.131346  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 23:06:10.131374  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:10.131396  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.131585  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:10.131628  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:10.131638  142733 certs.go:257] generating profile certs ...
	I1119 23:06:10.131709  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:10.131766  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 23:06:10.131799  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:10.131811  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:10.131823  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:10.131835  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:10.131844  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:10.131857  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:10.131867  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:10.131905  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:10.131923  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:10.131976  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:10.132017  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:10.132030  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:10.132063  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:10.132120  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:10.132148  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:10.132194  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:10.132221  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.132233  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.132244  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.132912  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:10.173830  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:10.215892  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:10.259103  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:10.294759  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:10.334934  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:10.388220  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:10.446365  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:10.481746  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:10.514956  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:10.547594  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:10.595613  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:06:10.619484  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:10.626921  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:10.641703  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647634  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647703  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.655724  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:10.670575  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:10.684630  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690618  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690694  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.698531  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:06:10.713731  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:10.729275  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735204  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735297  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.744718  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
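The ln -fs commands above implement OpenSSL's hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 must point at the CA so libssl can locate it by hash. A minimal sketch of the same pattern for the minikube CA, using the paths seen in this log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem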
	I1119 23:06:10.760092  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:10.765798  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:10.773791  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:10.781675  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:10.789835  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:10.797921  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:10.806330  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
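openssl x509 -checkend N exits 0 only if the certificate is still valid N seconds from now, so the checks above assert that none of the control-plane certs expire within 24 hours. Sketch:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"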
	I1119 23:06:10.814663  142733 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clust
erName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:06:10.814784  142733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:06:10.814836  142733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:06:10.862721  142733 cri.go:89] found id: ""
	I1119 23:06:10.862820  142733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:06:10.906379  142733 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:06:10.906398  142733 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:06:10.906444  142733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:06:10.937932  142733 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:06:10.938371  142733 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.938511  142733 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 23:06:10.938761  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.939284  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:06:10.939703  142733 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 23:06:10.939720  142733 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 23:06:10.939727  142733 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 23:06:10.939732  142733 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 23:06:10.939737  142733 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 23:06:10.939800  142733 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 23:06:10.940217  142733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:06:10.970469  142733 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 23:06:10.970501  142733 kubeadm.go:602] duration metric: took 64.095819ms to restartPrimaryControlPlane
	I1119 23:06:10.970515  142733 kubeadm.go:403] duration metric: took 155.861263ms to StartCluster
	I1119 23:06:10.970538  142733 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.970645  142733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.971536  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.971861  142733 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:10.971912  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:10.971934  142733 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:06:10.972157  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.972266  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:10.972332  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:10.972347  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 95.206µs

	I1119 23:06:10.972358  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:10.972373  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:10.972588  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.974762  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:10.975000  142733 out.go:179] * Enabled addons: 
	I1119 23:06:10.976397  142733 addons.go:515] duration metric: took 4.466316ms for enable addons: enabled=[]
	I1119 23:06:10.977405  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.977866  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:10.977902  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.978075  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:11.174757  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:11.174779  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:11.179357  142733 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 23:06:11.179394  142733 start.go:247] waiting for cluster config update ...
	I1119 23:06:11.179405  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:11.181383  142733 out.go:203] 
	I1119 23:06:11.182846  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:11.182976  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.184565  142733 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 23:06:11.185697  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:11.185715  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:11.185830  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:11.185845  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:11.185991  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.186234  142733 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:11.186285  142733 start.go:364] duration metric: took 28.134µs to acquireMachinesLock for "ha-487903-m02"
	I1119 23:06:11.186301  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:11.186314  142733 fix.go:54] fixHost starting: m02
	I1119 23:06:11.187948  142733 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 23:06:11.187969  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:11.189608  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 23:06:11.189647  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:11.189655  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:11.190534  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:11.190964  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:11.191485  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:11.192659  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
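The domain XML above is what libmachine hands back to libvirt when restarting the m02 VM. Equivalent inspection with virsh, assuming access to the qemu:///system connection used here:

	virsh -c qemu:///system dominfo ha-487903-m02
	virsh -c qemu:///system domifaddr ha-487903-m02 --source lease   # should report 192.168.39.191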
	
	I1119 23:06:12.559560  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:12.561198  142733 main.go:143] libmachine: domain is now running
	I1119 23:06:12.561220  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:12.562111  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562699  142733 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562715  142733 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 23:06:12.562721  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:12.563203  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.563229  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 23:06:12.563240  142733 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 23:06:12.563244  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:12.563250  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:12.566254  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.566903  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.566943  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.567198  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:12.567490  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:12.567510  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:15.660251  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:21.740210  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:24.742545  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 23:06:27.848690  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
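The "no route to host" and "connection refused" errors above are the expected phases while the VM boots and sshd comes up; libmachine simply retries `exit 0` until it succeeds. A rough bash equivalent of that wait loop, with the key path and user taken from the sshutil lines in this log:

	until ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa \
	      docker@192.168.39.191 'exit 0' 2>/dev/null; do
	  sleep 3
	done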
	I1119 23:06:27.852119  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852581  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.852609  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852840  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:27.853068  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:27.855169  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855519  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.855541  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855673  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.855857  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.855866  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:27.961777  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:27.961813  142733 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 23:06:27.964686  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965144  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.965168  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965332  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.965514  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.965525  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 23:06:28.090321  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 23:06:28.093353  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093734  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.093771  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093968  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.094236  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.094259  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:28.210348  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:28.210378  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:28.210394  142733 buildroot.go:174] setting up certificates
	I1119 23:06:28.210406  142733 provision.go:84] configureAuth start
	I1119 23:06:28.213280  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.213787  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.213819  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216188  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216513  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.216537  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216650  142733 provision.go:143] copyHostCerts
	I1119 23:06:28.216681  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216719  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:28.216731  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216806  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:28.216924  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.216954  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:28.216962  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.217011  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:28.217078  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217105  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:28.217114  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217151  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:28.217219  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
	I1119 23:06:28.306411  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:28.306488  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:28.309423  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309811  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.309838  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309994  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.397995  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:28.398093  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:28.433333  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:28.433422  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:06:28.465202  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:28.465281  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:28.497619  142733 provision.go:87] duration metric: took 287.196846ms to configureAuth
	I1119 23:06:28.497657  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:28.497961  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:28.500692  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501143  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.501166  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501348  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.501530  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.501542  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:28.756160  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:28.756188  142733 machine.go:97] duration metric: took 903.106737ms to provisionDockerMachine
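The sysconfig write above drops an environment file intended to be picked up by the crio unit on the minikube ISO, so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, and then restarts the runtime. A quick check on the node (sketch):

	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio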
	I1119 23:06:28.756199  142733 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 23:06:28.756221  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:28.756309  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:28.759030  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759384  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.759410  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759547  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.845331  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:28.850863  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:28.850908  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:28.850968  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:28.851044  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:28.851055  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:28.851135  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:28.863679  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:28.895369  142733 start.go:296] duration metric: took 139.152116ms for postStartSetup
	I1119 23:06:28.895468  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:28.898332  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898765  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.898790  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898999  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.985599  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:28.985693  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:29.047204  142733 fix.go:56] duration metric: took 17.860883759s for fixHost
	I1119 23:06:29.050226  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050744  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.050767  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050981  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:29.051235  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:29.051247  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:29.170064  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593589.134247097
	
	I1119 23:06:29.170097  142733 fix.go:216] guest clock: 1763593589.134247097
	I1119 23:06:29.170109  142733 fix.go:229] Guest: 2025-11-19 23:06:29.134247097 +0000 UTC Remote: 2025-11-19 23:06:29.047235815 +0000 UTC m=+36.391479959 (delta=87.011282ms)
	I1119 23:06:29.170136  142733 fix.go:200] guest clock delta is within tolerance: 87.011282ms
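The fixHost step reads the guest clock over SSH with `date +%s.%N`, compares it with the host clock, and accepts the drift only if it stays under a tolerance (87ms here). A small sketch of that comparison, assuming a one-second tolerance (not necessarily minikube's actual value) and a pre-captured `date` output:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns the absolute drift from the supplied host time.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	const tolerance = time.Second // assumed tolerance for the sketch
	delta, err := guestClockDelta("1763593589.134247097\n", time.Now())
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would sync the guest clock\n", delta)
	}
}
```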
	I1119 23:06:29.170145  142733 start.go:83] releasing machines lock for "ha-487903-m02", held for 17.983849826s
	I1119 23:06:29.173173  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.173648  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.173674  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.175909  142733 out.go:179] * Found network options:
	I1119 23:06:29.177568  142733 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 23:06:29.178760  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:06:29.179292  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:06:29.179397  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:29.179416  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:29.182546  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.182562  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183004  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183038  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183185  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183194  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:29.183426  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
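The two `fail to check proxy env: Error ip not in block` warnings come from comparing the node IP against NO_PROXY entries, where a bare IP entry is not a CIDR block. A rough sketch of such a check, assuming NO_PROXY holds comma-separated bare IPs and/or CIDRs (this is an illustration of the idea, not minikube's exact proxy code):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip matches any NO_PROXY entry, treating each
// entry as either a CIDR block or a bare IP address.
func ipInNoProxy(ip, noProxy string) bool {
	target := net.ParseIP(ip)
	if target == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(target) {
				return true
			}
			continue
		}
		if other := net.ParseIP(entry); other != nil && other.Equal(target) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(ipInNoProxy("192.168.39.191", "192.168.39.15"))   // false: different bare IP
	fmt.Println(ipInNoProxy("192.168.39.191", "192.168.39.0/24")) // true: inside the block
}
```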
	I1119 23:06:29.429918  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:29.437545  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:29.437605  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:29.459815  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:29.459846  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:29.459981  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:29.484636  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:29.506049  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:29.506131  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:29.529159  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:29.547692  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:29.709216  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:29.933205  142733 docker.go:234] disabling docker service ...
	I1119 23:06:29.933271  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:29.951748  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:29.967973  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:30.147148  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:30.300004  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:30.316471  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:30.341695  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:30.341768  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.355246  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:30.355313  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.368901  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.381931  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.395421  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:30.410190  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.424532  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.447910  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
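The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon_cgroup, and the unprivileged-port sysctl, each via `sed`. A minimal sketch that assembles the first four of those shell commands before they would be handed to an SSH runner (the command strings mirror the log; the runner itself is omitted):

```go
package main

import "fmt"

// crioConfigCommands returns the shell commands used to point CRI-O at a
// pause image and a cgroup driver, mirroring the sed invocations in the log.
func crioConfigCommands(pauseImage, cgroupDriver string) []string {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.10.1", "cgroupfs") {
		fmt.Println(cmd)
	}
}
```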
	I1119 23:06:30.462079  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:30.473475  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:30.473555  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:30.495385  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
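When the `net.bridge.bridge-nf-call-iptables` sysctl cannot be read, the runner loads `br_netfilter` and then enables IPv4 forwarding. A sketch of the same decision run locally; it shells out to modprobe and writes /proc, so it only does anything useful on a Linux guest with root:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is absent, the br_netfilter module is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Ensure the kernel forwards IPv4 traffic, as the log does with `echo 1 > .../ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward needs root:", err)
	}
}
```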
	I1119 23:06:30.507744  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:30.650555  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:30.778126  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:30.778224  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:06:30.784440  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:30.784509  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:30.789036  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:30.834259  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
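`Will wait 60s for socket path /var/run/crio/crio.sock` is a poll-until-present loop after the CRI-O restart. A compact sketch of that wait, assuming a 500ms poll interval:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the given path exists or the timeout elapses,
// the same shape as waiting for /var/run/crio/crio.sock after a restart.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
```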
	I1119 23:06:30.834368  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.866387  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.901524  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:30.902829  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:06:30.906521  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.906929  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:30.906948  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.907113  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:30.912354  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:30.929641  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:06:30.929929  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:30.931609  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:06:30.931865  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 23:06:30.931896  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:30.931917  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:30.932057  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:30.932118  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:30.932128  142733 certs.go:257] generating profile certs ...
	I1119 23:06:30.932195  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:30.932244  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 23:06:30.932279  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:30.932291  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:30.932302  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:30.932313  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:30.932326  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:30.932335  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:30.932348  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:30.932360  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:30.932370  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:30.932416  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:30.932442  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:30.932451  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:30.932473  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:30.932493  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:30.932514  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:30.932559  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:30.932585  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:30.932599  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:30.932609  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:30.934682  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935112  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:30.935137  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935281  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:31.009328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:06:31.016386  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:06:31.030245  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:06:31.035820  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:06:31.049236  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:06:31.054346  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:06:31.067895  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:06:31.073323  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:06:31.087209  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:06:31.092290  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:06:31.105480  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:06:31.110774  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:06:31.124311  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:31.157146  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:31.188112  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:31.219707  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:31.252776  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:31.288520  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:31.324027  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:31.356576  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:31.388386  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:31.418690  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:31.450428  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:31.480971  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:06:31.502673  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:06:31.525149  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:06:31.547365  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:06:31.569864  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:06:31.592406  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:06:31.614323  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 23:06:31.638212  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:31.645456  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:31.659620  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665114  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665178  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.672451  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:06:31.686443  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:31.700888  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706357  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706409  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.713959  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:31.727492  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:31.741862  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747549  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747622  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.755354  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:06:31.769594  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:31.775132  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:31.783159  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:31.790685  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:31.798517  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:31.806212  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:31.814046  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
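Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The equivalent check in Go, reading one PEM-encoded certificate (the path is illustrative):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` (86400s = 24h in the log).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
```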
	I1119 23:06:31.822145  142733 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 23:06:31.822259  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:06:31.822290  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:31.822339  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:31.849048  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:31.849130  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
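The kube-vip static-pod manifest above is generated from the cluster's VIP, API server port, and image. A rough text/template sketch of rendering the interesting portion of such a manifest; this is an illustration, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// vipConfig carries the values that vary between clusters in the manifest above.
type vipConfig struct {
	VIP   string
	Port  string
	Image string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    - name: lb_port
      value: "{{ .Port }}"
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	cfg := vipConfig{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v1.0.1"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```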
	I1119 23:06:31.849212  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:31.862438  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:31.862506  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:06:31.874865  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:06:31.897430  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:31.918586  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:31.939534  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:31.943930  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:31.958780  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.100156  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.133415  142733 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:32.133754  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.133847  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:32.133936  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:32.133949  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 113.063µs
	I1119 23:06:32.133960  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:32.133970  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:32.134176  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.135284  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:06:32.136324  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.136777  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.139351  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.139927  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:32.139963  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.140169  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:32.321166  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.321693  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.321714  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.323895  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.326607  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327119  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:32.327146  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327377  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:32.352387  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:06:32.352506  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:06:32.352953  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:06:32.500722  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.500745  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.503448  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
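Both nodes are checked for the preloaded images with `sudo crictl images --output json`. A sketch that runs the same command and prints the repo tags it finds; the JSON field names follow crictl output as commonly observed, so treat them as an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the subset of `crictl images --output json` used here.
// Field names are based on observed crictl output and may need adjusting.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range imgs.Images {
		fmt.Println(img.RepoTags)
	}
}
```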
	I1119 23:06:34.010161  142733 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:06:41.592816  142733 node_ready.go:49] node "ha-487903-m02" is "Ready"
	I1119 23:06:41.592846  142733 node_ready.go:38] duration metric: took 9.239866557s for node "ha-487903-m02" to be "Ready" ...
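`waiting up to 6m0s for node "ha-487903-m02" to be "Ready"` is a poll on the node's Ready condition via the (rewritten) API server endpoint. A client-go sketch of that wait; the kubeconfig path and intervals here are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady checks the NodeReady condition on the named node.
func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := nodeIsReady(context.Background(), cs, "ha-487903-m02"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to be Ready")
}
```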
	I1119 23:06:41.592864  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:06:41.592953  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.093838  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.118500  142733 api_server.go:72] duration metric: took 9.985021825s to wait for apiserver process to appear ...
	I1119 23:06:42.118528  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:06:42.118547  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.123892  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.123926  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
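The 500s above come from /healthz while the `rbac/bootstrap-roles` post-start hook is still pending; the loop simply re-checks about every half second until the endpoint returns 200. A bare-bones sketch of that probe (it skips TLS verification purely to keep the sketch short; the real client presents the cluster CA and client certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NOTE: InsecureSkipVerify is only for brevity in this sketch; the log's
	// client authenticates with the profile's client cert and the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // placeholder overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.15:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}
```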
	I1119 23:06:42.619715  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.637068  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.637097  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.118897  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.133996  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.134034  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.618675  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.661252  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.661293  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.118914  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.149362  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.149396  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.618983  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.670809  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.670848  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.119579  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.130478  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:45.130510  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.619260  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.628758  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:06:45.631891  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:06:45.631928  142733 api_server.go:131] duration metric: took 3.513391545s to wait for apiserver health ...
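The 500s above are expected while the rbac/bootstrap-roles post-start hook is still finishing; the wait loop simply keeps polling /healthz until it gets a 200 ("ok") at 23:06:45.628. A minimal stdlib Go sketch of that polling pattern (the TLS-skipping client, interval and timeout are assumptions of the sketch, not minikube's actual api_server.go code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// A 500 with "[-]poststarthook/... failed" in the body just means the
// apiserver has not finished its post-start hooks yet, so we retry.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption: the apiserver's serving cert is not trusted locally,
		// so certificate verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.15:8443/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}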
	I1119 23:06:45.631939  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:06:45.660854  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:06:45.660934  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660946  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660955  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.660965  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.660971  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.660978  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.660983  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.660988  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.660995  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.661002  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.661009  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.661014  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.661025  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.661033  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.661038  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.661043  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.661047  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.661051  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.661062  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.661066  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.661071  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.661075  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.661080  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.661084  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.661091  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.661095  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.661103  142733 system_pods.go:74] duration metric: took 29.156984ms to wait for pod list to return data ...
	I1119 23:06:45.661123  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:06:45.681470  142733 default_sa.go:45] found service account: "default"
	I1119 23:06:45.681503  142733 default_sa.go:55] duration metric: took 20.368831ms for default service account to be created ...
	I1119 23:06:45.681516  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:06:45.756049  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:06:45.756097  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756115  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756124  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.756130  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.756141  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.756153  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.756158  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.756163  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.756168  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.756180  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.756187  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.756193  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.756214  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.756220  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.756227  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.756232  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.756242  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.756248  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.756253  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.756258  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.756267  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.756276  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.756281  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.756286  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.756290  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.756299  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.756310  142733 system_pods.go:126] duration metric: took 74.786009ms to wait for k8s-apps to be running ...
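Once the apiserver answers, the test waits for all 26 kube-system pods to be listed and running. A rough client-go equivalent of that listing, assuming KUBECONFIG points at this profile's kubeconfig (minikube itself drives this through its system_pods helpers rather than this exact code):

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG is set to the ha-487903 profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q is %s\n", p.Name, p.Status.Phase)
		}
	}
}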
	I1119 23:06:45.756320  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:06:45.756377  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:06:45.804032  142733 system_svc.go:56] duration metric: took 47.697905ms WaitForService to wait for kubelet
	I1119 23:06:45.804075  142733 kubeadm.go:587] duration metric: took 13.670605736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:06:45.804108  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:06:45.809115  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809156  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809181  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809187  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809193  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809200  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809208  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809216  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809222  142733 node_conditions.go:105] duration metric: took 5.108401ms to run NodePressure ...
	I1119 23:06:45.809243  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:45.809289  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:45.811415  142733 out.go:203] 
	I1119 23:06:45.813102  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:45.813254  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.814787  142733 out.go:179] * Starting "ha-487903-m03" control-plane node in "ha-487903" cluster
	I1119 23:06:45.815937  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:45.815964  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:45.816100  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:45.816115  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:45.816268  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.816543  142733 start.go:360] acquireMachinesLock for ha-487903-m03: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:45.816612  142733 start.go:364] duration metric: took 39.245µs to acquireMachinesLock for "ha-487903-m03"
	I1119 23:06:45.816630  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:45.816642  142733 fix.go:54] fixHost starting: m03
	I1119 23:06:45.818510  142733 fix.go:112] recreateIfNeeded on ha-487903-m03: state=Stopped err=<nil>
	W1119 23:06:45.818540  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:45.819904  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m03" ...
	I1119 23:06:45.819950  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:45.819961  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:45.820828  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:45.821278  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:45.821805  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:45.823105  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m03</name>
	  <uuid>e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/ha-487903-m03.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b3:68:3d'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7a:90:da'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
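This is the domain definition that libmachine hands back to libvirt when it restarts the stopped m03 VM. Outside of minikube the same restart can be reproduced with virsh against the qemu:///system URI; below is a small Go sketch shelling out to virsh, offered only as a stand-in for the libvirt API calls libmachine actually makes:

package main

import (
	"fmt"
	"os/exec"
)

// startDomain boots an already-defined libvirt domain and prints its state.
func startDomain(name string) error {
	// "virsh start" fails if the domain is already running; callers that
	// want idempotency can check "virsh domstate" first.
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh start %s: %v\n%s", name, err, out)
	}
	state, err := exec.Command("virsh", "--connect", "qemu:///system", "domstate", name).CombinedOutput()
	if err != nil {
		return err
	}
	fmt.Printf("domain %s state: %s", name, state)
	return nil
}

func main() {
	if err := startDomain("ha-487903-m03"); err != nil {
		fmt.Println(err)
	}
}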
	
	I1119 23:06:47.444391  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:47.445887  142733 main.go:143] libmachine: domain is now running
	I1119 23:06:47.445908  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:47.446706  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447357  142733 main.go:143] libmachine: domain ha-487903-m03 has current primary IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447380  142733 main.go:143] libmachine: found domain IP: 192.168.39.160
	I1119 23:06:47.447388  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:47.447950  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.447985  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"}
	I1119 23:06:47.447998  142733 main.go:143] libmachine: reserved static IP address 192.168.39.160 for domain ha-487903-m03
	I1119 23:06:47.448003  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:47.448010  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:47.450788  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.451253  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451441  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:47.451661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:06:47.451673  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:50.540171  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:56.620202  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:59.621964  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: connection refused
	I1119 23:07:02.732773  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
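The dial errors above ("no route to host", "connection refused") are the normal signature of a guest that is still booting: the loop keeps redialing port 22 until the `exit 0` command finally succeeds over SSH at 23:07:02. A stdlib-only sketch of the TCP-level part of that wait (the retry interval and overall timeout are assumptions):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH redials addr until the TCP handshake succeeds or the deadline passes.
// "no route to host" and "connection refused" are treated as transient errors.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh on %s did not come up within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.39.160:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}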
	I1119 23:07:02.736628  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737046  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.737076  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737371  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:02.737615  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:02.740024  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.740555  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740752  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.741040  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.741054  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:02.852322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:02.852355  142733 buildroot.go:166] provisioning hostname "ha-487903-m03"
	I1119 23:07:02.855519  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856083  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.856112  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856309  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.856556  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.856572  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m03 && echo "ha-487903-m03" | sudo tee /etc/hostname
	I1119 23:07:02.990322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m03
	
	I1119 23:07:02.993714  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994202  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.994233  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994405  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.994627  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.994651  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:03.118189  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
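Hostname provisioning is a handful of shell commands run over the SSH connection established above: set the kernel hostname, write /etc/hostname, and patch the 127.0.1.1 entry in /etc/hosts. A minimal sketch of running one such command with golang.org/x/crypto/ssh, using the machine key path shown in the log; host-key verification is skipped here purely to keep the sketch short:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes a single shell command on the guest and returns its combined output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; not for production use
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.160:22", "docker",
		"/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa",
		`sudo hostname ha-487903-m03 && echo "ha-487903-m03" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}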
	I1119 23:07:03.118221  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:03.118237  142733 buildroot.go:174] setting up certificates
	I1119 23:07:03.118248  142733 provision.go:84] configureAuth start
	I1119 23:07:03.121128  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.121630  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.121656  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124221  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124569  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.124592  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124715  142733 provision.go:143] copyHostCerts
	I1119 23:07:03.124748  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124787  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:03.124797  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124892  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:03.125005  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125037  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:03.125047  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125090  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:03.125160  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125188  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:03.125198  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125238  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:03.125306  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m03 san=[127.0.0.1 192.168.39.160 ha-487903-m03 localhost minikube]
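configureAuth issues a server certificate whose SANs cover the loopback address, the node IP, the hostname and the generic minikube/localhost names, signed by the shared minikube CA. A compact crypto/x509 sketch of building a certificate with those SANs; for brevity it signs with a throwaway CA generated in-process instead of loading ca.pem/ca-key.pem as minikube does:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (error handling elided for brevity in this sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-487903-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-487903-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.160")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}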
	I1119 23:07:03.484960  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:03.485022  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:03.487560  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488008  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.488032  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488178  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:03.574034  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:03.574117  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:03.604129  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:03.604216  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:03.635162  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:03.635235  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:07:03.668358  142733 provision.go:87] duration metric: took 550.091154ms to configureAuth
	I1119 23:07:03.668387  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:03.668643  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:03.671745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672214  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.672242  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672395  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:03.672584  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:03.672599  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:03.950762  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:03.950792  142733 machine.go:97] duration metric: took 1.213162195s to provisionDockerMachine
	I1119 23:07:03.950807  142733 start.go:293] postStartSetup for "ha-487903-m03" (driver="kvm2")
	I1119 23:07:03.950821  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:03.950908  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:03.954010  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954449  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.954472  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954609  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.043080  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:04.048534  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:04.048567  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:04.048645  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:04.048729  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:04.048741  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:04.048850  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:04.062005  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:04.095206  142733 start.go:296] duration metric: took 144.382125ms for postStartSetup
	I1119 23:07:04.095293  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:04.097927  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098314  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.098337  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098469  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.187620  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:04.187695  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:04.250288  142733 fix.go:56] duration metric: took 18.433638518s for fixHost
	I1119 23:07:04.253813  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254395  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.254423  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254650  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:04.254923  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:04.254938  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:04.407951  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593624.369608325
	
	I1119 23:07:04.407981  142733 fix.go:216] guest clock: 1763593624.369608325
	I1119 23:07:04.407992  142733 fix.go:229] Guest: 2025-11-19 23:07:04.369608325 +0000 UTC Remote: 2025-11-19 23:07:04.250316644 +0000 UTC m=+71.594560791 (delta=119.291681ms)
	I1119 23:07:04.408018  142733 fix.go:200] guest clock delta is within tolerance: 119.291681ms
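The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the ~119ms delta as being within tolerance. A small sketch of that comparison; the 2-second tolerance used below is an assumption, not minikube's configured value:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` as captured on the guest (value taken from the log above).
	guestRaw := "1763593624.369608325"
	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %v\n", delta)
	// Assumption: anything under a couple of seconds counts as "within tolerance".
	if math.Abs(delta.Seconds()) < 2 {
		fmt.Println("guest clock delta is within tolerance")
	}
}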
	I1119 23:07:04.408026  142733 start.go:83] releasing machines lock for "ha-487903-m03", held for 18.591403498s
	I1119 23:07:04.411093  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.411490  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.411518  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.413431  142733 out.go:179] * Found network options:
	I1119 23:07:04.414774  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191
	W1119 23:07:04.415854  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.415891  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416317  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416348  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:04.416422  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:04.416436  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:04.419695  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.419745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420204  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420228  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420310  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420352  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420397  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.420643  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.657635  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:04.665293  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:04.665372  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:04.689208  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:04.689244  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:04.689352  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:04.714215  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:04.733166  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:04.733238  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:04.756370  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:04.778280  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:04.943140  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:05.174139  142733 docker.go:234] disabling docker service ...
	I1119 23:07:05.174230  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:05.192652  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:05.219388  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:05.383745  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:05.538084  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:05.555554  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:05.579503  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:05.579567  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.593464  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:05.593530  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.609133  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.624066  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.637817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:05.653008  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.666833  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.691556  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.705398  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:05.717404  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:05.717480  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:05.740569  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:07:05.753510  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:05.907119  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:07:06.048396  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:06.048486  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:06.055638  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:06.055719  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:06.061562  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:06.110271  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
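After crio is restarted, the log waits up to 60s for /var/run/crio/crio.sock to appear and then queries the runtime with crictl. A sketch of that wait-then-query sequence; in the real flow both steps run on the guest via ssh_runner rather than locally as shown here:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls for the CRI socket path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// Ask the runtime for its version, mirroring the crictl output shown above.
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}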
	I1119 23:07:06.110342  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:06.146231  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:06.178326  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:06.179543  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:06.180760  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:06.184561  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.184934  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:06.184957  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.185144  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:06.190902  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:06.207584  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:06.207839  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:06.209435  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:06.209634  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.160
	I1119 23:07:06.209644  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:06.209656  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:06.209760  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:06.209804  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:06.209811  142733 certs.go:257] generating profile certs ...
	I1119 23:07:06.209893  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:07:06.209959  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.0aa3aad5
	I1119 23:07:06.210018  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:07:06.210035  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:06.210054  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:06.210067  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:06.210080  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:06.210091  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:07:06.210102  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:07:06.210114  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:07:06.210126  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:07:06.210182  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:06.210223  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:06.210235  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:06.210266  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:06.210291  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:06.210312  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:06.210372  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:06.210412  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:06.210426  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:06.210444  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:06.213240  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213640  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:06.213661  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213778  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:06.286328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:07:06.292502  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:07:06.306380  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:07:06.311916  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:07:06.325372  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:07:06.331268  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:07:06.346732  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:07:06.351946  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:07:06.366848  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:07:06.372483  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:07:06.389518  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:07:06.395938  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:07:06.409456  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:06.450401  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:06.486719  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:06.523798  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:06.561368  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:07:06.599512  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:07:06.634946  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:07:06.670031  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:07:06.704068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:06.735677  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:06.768990  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:06.806854  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:07:06.832239  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:07:06.856375  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:07:06.879310  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:07:06.902404  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:07:06.927476  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:07:06.952223  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 23:07:06.974196  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:06.981644  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:06.999412  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005373  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005446  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.013895  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:07.031130  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:07.046043  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.051937  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.052014  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.059543  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:07.078500  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:07.093375  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099508  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099578  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.107551  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:07:07.123243  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:07.129696  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:07:07.137849  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:07:07.145809  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:07:07.153731  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:07:07.161120  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:07:07.168309  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
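The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate stays valid for at least another 24 hours before the node is rejoined. A minimal Go sketch of the same check, assuming one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 24h")
}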
	I1119 23:07:07.176142  142733 kubeadm.go:935] updating node {m03 192.168.39.160 8443 v1.34.1 crio true true} ...
	I1119 23:07:07.176256  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:07:07.176285  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:07:07.176329  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:07:07.203479  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:07:07.203570  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
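The kube-vip static-pod manifest generated above carries all of its settings as container env vars (VIP 192.168.39.254, port 8443, leader election). A small Go sketch, assuming gopkg.in/yaml.v3 and a deliberately trimmed struct, that extracts those env settings from the manifest path shown in the log:

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Only the fields needed from the static-pod manifest.
type manifest struct {
	Spec struct {
		Containers []struct {
			Name string `yaml:"name"`
			Env  []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		log.Fatal(err)
	}
	for _, c := range m.Spec.Containers {
		for _, e := range c.Env {
			// Entries using valueFrom print an empty value here.
			fmt.Printf("%s: %s=%s\n", c.Name, e.Name, e.Value)
		}
	}
}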
	I1119 23:07:07.203646  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:07.217413  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:07.217503  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:07:07.230746  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:07.256658  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:07.282507  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:07:07.305975  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:07.311016  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
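The shell pipeline above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: strip any stale line, then append the VIP. A hedged Go sketch of the same update (hostname, IP, and file path taken from the log; error handling kept minimal):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsFile = "/etc/hosts"
		hostname  = "control-plane.minikube.internal"
		ip        = "192.168.39.254"
	)
	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for the control-plane alias.
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}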
	I1119 23:07:07.328648  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.494364  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.517777  142733 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:07:07.518159  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.518271  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:07.518379  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:07.518395  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 133.678µs
	I1119 23:07:07.518407  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:07.518421  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:07.518647  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.520684  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:07.520832  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.521966  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.523804  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524372  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:07.524416  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524599  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:07.723792  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.724326  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.724350  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.726364  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.728774  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729239  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:07.729270  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729424  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:07.746212  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:07.746278  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:07.746586  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m03" to be "Ready" ...
	I1119 23:07:07.858504  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.858530  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.860355  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.862516  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.862974  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:07.863000  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.863200  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:08.011441  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:08.011468  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:08.013393  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03
	W1119 23:07:09.751904  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:12.252353  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:14.254075  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:16.256443  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:18.752485  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	I1119 23:07:19.751738  142733 node_ready.go:49] node "ha-487903-m03" is "Ready"
	I1119 23:07:19.751783  142733 node_ready.go:38] duration metric: took 12.005173883s for node "ha-487903-m03" to be "Ready" ...
	I1119 23:07:19.751803  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:07:19.751911  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:07:19.833604  142733 api_server.go:72] duration metric: took 12.315777974s to wait for apiserver process to appear ...
	I1119 23:07:19.833635  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:07:19.833668  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:07:19.841482  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:07:19.842905  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:07:19.842932  142733 api_server.go:131] duration metric: took 9.287176ms to wait for apiserver health ...
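The healthz wait above is an HTTPS GET against https://192.168.39.15:8443/healthz that expects a 200 "ok". A minimal Go sketch of that probe, assuming the profile's client certificate and CA paths shown in the kapi.go client config earlier in the log:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("could not parse CA certificate")
	}
	clientCert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt",
		"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key",
	)
	if err != nil {
		log.Fatal(err)
	}
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{clientCert}},
		},
	}
	resp, err := client.Get("https://192.168.39.15:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}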
	I1119 23:07:19.842951  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:07:19.855636  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:07:19.855671  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.855679  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.855689  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.855695  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.855700  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.855705  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.855710  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.855714  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.855724  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.855733  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.855738  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.855743  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.855747  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.855753  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.855760  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.855764  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.855769  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.855774  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.855778  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.855783  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.855793  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.855797  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.855802  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.855806  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.855814  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.855818  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.855827  142733 system_pods.go:74] duration metric: took 12.86809ms to wait for pod list to return data ...
	I1119 23:07:19.855842  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:07:19.860573  142733 default_sa.go:45] found service account: "default"
	I1119 23:07:19.860597  142733 default_sa.go:55] duration metric: took 4.749483ms for default service account to be created ...
	I1119 23:07:19.860606  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:07:19.870790  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:07:19.870825  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.870831  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.870836  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.870840  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.870843  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.870847  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.870851  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.870854  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.870857  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.870861  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.870865  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.870870  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.870895  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.870902  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.870911  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.870916  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.870924  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.870929  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.870936  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.870941  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.870946  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.870953  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.870957  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.870963  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.870966  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.870969  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.870982  142733 system_pods.go:126] duration metric: took 10.369487ms to wait for k8s-apps to be running ...
	I1119 23:07:19.870995  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:19.871070  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:19.923088  142733 system_svc.go:56] duration metric: took 52.080591ms WaitForService to wait for kubelet
	I1119 23:07:19.923137  142733 kubeadm.go:587] duration metric: took 12.405311234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:19.923168  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:19.930259  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930299  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930316  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930323  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930329  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930334  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930343  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930352  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930359  142733 node_conditions.go:105] duration metric: took 7.184829ms to run NodePressure ...
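The NodePressure verification reads each node's capacity from the API (2 CPUs and 17734596Ki of ephemeral storage per node in this run). A hedged client-go sketch that lists nodes and prints capacity plus the Ready condition; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; point this at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		ready := corev1.ConditionUnknown
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = c.Status
			}
		}
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s Ready=%s\n",
			n.Name, cpu.String(), storage.String(), ready)
	}
}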
	I1119 23:07:19.930381  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:19.930425  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:19.932180  142733 out.go:203] 
	I1119 23:07:19.934088  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:19.934226  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.935991  142733 out.go:179] * Starting "ha-487903-m04" worker node in "ha-487903" cluster
	I1119 23:07:19.937566  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:07:19.937584  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:07:19.937693  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:07:19.937716  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:07:19.937810  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.938027  142733 start.go:360] acquireMachinesLock for ha-487903-m04: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:07:19.938076  142733 start.go:364] duration metric: took 28.868µs to acquireMachinesLock for "ha-487903-m04"
	I1119 23:07:19.938095  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:07:19.938109  142733 fix.go:54] fixHost starting: m04
	I1119 23:07:19.940296  142733 fix.go:112] recreateIfNeeded on ha-487903-m04: state=Stopped err=<nil>
	W1119 23:07:19.940327  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:07:19.942168  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m04" ...
	I1119 23:07:19.942220  142733 main.go:143] libmachine: starting domain...
	I1119 23:07:19.942265  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:07:19.943145  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:07:19.943566  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:07:19.944170  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:07:19.945811  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m04</name>
	  <uuid>2ce148a1-b982-46f6-ada0-6a5a5b14ddce</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/ha-487903-m04.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:eb:f3:c3'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:03:3a:d4'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
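Once the domain XML above is defined, the driver waits for the DHCP lease whose MAC matches the interface on network mk-ha-487903. A small Go sketch, using only encoding/xml and a trimmed struct, that pulls the per-network MAC addresses out of such a domain definition (the input filename is illustrative):

package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"os"
)

// Minimal view of a libvirt domain definition.
type domain struct {
	Name    string `xml:"name"`
	Devices struct {
		Interfaces []struct {
			MAC struct {
				Address string `xml:"address,attr"`
			} `xml:"mac"`
			Source struct {
				Network string `xml:"network,attr"`
			} `xml:"source"`
		} `xml:"interface"`
	} `xml:"devices"`
}

func main() {
	data, err := os.ReadFile("ha-487903-m04.xml") // assumed dump of the XML above
	if err != nil {
		log.Fatal(err)
	}
	var d domain
	if err := xml.Unmarshal(data, &d); err != nil {
		log.Fatal(err)
	}
	for _, iface := range d.Devices.Interfaces {
		fmt.Printf("%s: network %s -> MAC %s\n", d.Name, iface.Source.Network, iface.MAC.Address)
	}
}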
	
	I1119 23:07:21.541216  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:07:21.542947  142733 main.go:143] libmachine: domain is now running
	I1119 23:07:21.542968  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:07:21.543929  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544529  142733 main.go:143] libmachine: domain ha-487903-m04 has current primary IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544546  142733 main.go:143] libmachine: found domain IP: 192.168.39.187
	I1119 23:07:21.544554  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:07:21.545091  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.545120  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"}
	I1119 23:07:21.545133  142733 main.go:143] libmachine: reserved static IP address 192.168.39.187 for domain ha-487903-m04
	I1119 23:07:21.545137  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:07:21.545142  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:07:21.547650  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548218  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.548249  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548503  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:21.548718  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:21.548730  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:07:24.652184  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:30.732203  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:34.764651  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: connection refused
	I1119 23:07:37.880284  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
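The "no route to host" and "connection refused" errors above are expected while the VM boots; the driver simply retries until SSH answers. A hedged Go sketch of that wait loop as a plain TCP dial against port 22 (retry interval and overall timeout are assumptions):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

// waitForSSH retries a TCP dial to addr until it succeeds or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s failed (%v), retrying...\n", addr, err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("192.168.39.187:22", 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH port is reachable")
}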
	I1119 23:07:37.884099  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884565  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.884591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884934  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:37.885280  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:37.887971  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888368  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.888391  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888542  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:37.888720  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:37.888729  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:37.998350  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:37.998394  142733 buildroot.go:166] provisioning hostname "ha-487903-m04"
	I1119 23:07:38.002080  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002563  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.002588  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002794  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.003043  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.003057  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m04 && echo "ha-487903-m04" | sudo tee /etc/hostname
	I1119 23:07:38.135349  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m04
	
	I1119 23:07:38.138757  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139357  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.139392  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139707  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.140010  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.140053  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:38.264087  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:07:38.264126  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:38.264149  142733 buildroot.go:174] setting up certificates
	I1119 23:07:38.264161  142733 provision.go:84] configureAuth start
	I1119 23:07:38.267541  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.268176  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.268215  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.270752  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271136  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.271156  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271421  142733 provision.go:143] copyHostCerts
	I1119 23:07:38.271453  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271483  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:38.271492  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271573  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:38.271646  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271664  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:38.271667  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271693  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:38.271735  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271751  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:38.271757  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271779  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:38.271823  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m04 san=[127.0.0.1 192.168.39.187 ha-487903-m04 localhost minikube]
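configureAuth above issues a per-machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.39.187, ha-487903-m04, localhost, minikube). A compact Go sketch of generating such a SAN-bearing certificate; it is self-signed here for brevity, whereas minikube signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-487903-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the ones in the log above.
		DNSNames:    []string{"ha-487903-m04", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
	}
	// Self-signed for brevity; signing with the cluster CA would pass a CA template and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certOut, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyOut, err := os.Create("server-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}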
	I1119 23:07:38.932314  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:38.932380  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:38.935348  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.935810  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.935836  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.936006  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.025808  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:39.025896  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:39.060783  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:39.060907  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:39.093470  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:39.093540  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1119 23:07:39.126116  142733 provision.go:87] duration metric: took 861.930238ms to configureAuth
	I1119 23:07:39.126158  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:39.126455  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:39.129733  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130126  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.130155  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130312  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.130560  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.130587  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:39.433038  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:39.433084  142733 machine.go:97] duration metric: took 1.547777306s to provisionDockerMachine
	I1119 23:07:39.433101  142733 start.go:293] postStartSetup for "ha-487903-m04" (driver="kvm2")
	I1119 23:07:39.433114  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:39.433178  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:39.436063  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436658  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.436689  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436985  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.524100  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:39.529723  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:39.529752  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:39.529847  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:39.529973  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:39.529988  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:39.530101  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:39.544274  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:39.576039  142733 start.go:296] duration metric: took 142.916645ms for postStartSetup
	I1119 23:07:39.576112  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:39.578695  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579305  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.579334  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579504  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.668947  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:39.669041  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:39.733896  142733 fix.go:56] duration metric: took 19.795762355s for fixHost
	I1119 23:07:39.737459  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738018  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.738061  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738362  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.738661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.738687  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:39.869213  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593659.839682658
	
	I1119 23:07:39.869234  142733 fix.go:216] guest clock: 1763593659.839682658
	I1119 23:07:39.869241  142733 fix.go:229] Guest: 2025-11-19 23:07:39.839682658 +0000 UTC Remote: 2025-11-19 23:07:39.733931353 +0000 UTC m=+107.078175487 (delta=105.751305ms)
	I1119 23:07:39.869257  142733 fix.go:200] guest clock delta is within tolerance: 105.751305ms
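The guest-clock step compares the VM's `date +%s.%N` output with the host clock and accepts a small drift. A tiny Go sketch of that comparison, reusing the timestamp from the log; the tolerance value is an assumption, not minikube's exact threshold:

package main

import (
	"fmt"
	"log"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` captured from the guest (value taken from the log above).
	guestOut := "1763593659.839682658"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		log.Fatal(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest).Seconds()
	const tolerance = 2.0 // seconds; assumed, not minikube's exact value
	if math.Abs(delta) > tolerance {
		fmt.Printf("guest clock delta %.3fs exceeds tolerance, would resync\n", delta)
		return
	}
	fmt.Printf("guest clock delta %.3fs is within tolerance\n", delta)
}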
	I1119 23:07:39.869262  142733 start.go:83] releasing machines lock for "ha-487903-m04", held for 19.931174771s
	I1119 23:07:39.872591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.873064  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.873085  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.875110  142733 out.go:179] * Found network options:
	I1119 23:07:39.876331  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	W1119 23:07:39.877435  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877458  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877478  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877889  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877920  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877932  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:39.877962  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:39.877987  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:39.881502  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.881991  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882088  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882128  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882283  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.882500  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882524  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882696  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:40.118089  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:40.126955  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:40.127054  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:40.150315  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:40.150351  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:40.150436  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:40.176112  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:40.195069  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:40.195148  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:40.217113  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:40.240578  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:40.404108  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:40.642170  142733 docker.go:234] disabling docker service ...
	I1119 23:07:40.642260  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:40.659709  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:40.677698  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:40.845769  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:41.005373  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:41.028115  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:41.057337  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:41.057425  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.072373  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:41.072466  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.086681  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.100921  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.115817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:41.132398  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.149261  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.174410  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.189666  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:41.202599  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:41.202679  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:41.228059  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:07:41.243031  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:41.403712  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
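Taken together, the sed edits above set the pause image, the cgroupfs cgroup manager, the conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A hedged sketch of what the touched keys end up looking like; the section headers and the heredoc form are assumptions for readability, the real file is edited in place and carries other defaults:

    # Illustrative only; key names and values are taken from the sed commands in the log.
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio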
	I1119 23:07:41.527678  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:41.527765  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:41.534539  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:41.534620  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:41.539532  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:41.585994  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:07:41.586086  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.621736  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.656086  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:41.657482  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:41.658756  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:41.659970  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	I1119 23:07:41.664105  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:41.664550  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664716  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:41.670624  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:41.688618  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:41.688858  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:41.690292  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:41.690482  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.187
	I1119 23:07:41.690491  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:41.690504  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:41.690631  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:41.690692  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:41.690711  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:41.690731  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:41.690750  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:41.690768  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:41.690840  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:41.690886  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:41.690897  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:41.690917  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:41.690937  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:41.690958  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:41.690994  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:41.691025  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.691038  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:41.691048  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:41.691068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:41.726185  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:41.762445  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:41.804578  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:41.841391  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:41.881178  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:41.917258  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:41.953489  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:41.961333  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:41.977066  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983550  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983610  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.991656  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:42.006051  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:42.021516  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028801  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028900  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.036899  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:42.052553  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:42.067472  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073674  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073751  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.081607  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
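The openssl/ln sequence above follows the standard OpenSSL hash-directory layout: each CA file is linked under /etc/ssl/certs as <subject-hash>.0 so the runtime can locate it. A minimal sketch for one of the certificates above (file name from the log; the hash it resolves to in this run is b5213941):

    # Illustrative sketch: derive the subject hash and create the hash-named symlink.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run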
	I1119 23:07:42.096183  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:42.101534  142733 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 23:07:42.101590  142733 kubeadm.go:935] updating node {m04 192.168.39.187 0 v1.34.1 crio false true} ...
	I1119 23:07:42.101683  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:07:42.101762  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:42.115471  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:42.115548  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1119 23:07:42.129019  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:42.153030  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:42.178425  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:42.183443  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:42.200493  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.356810  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:42.394017  142733 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1119 23:07:42.394368  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.394458  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:42.394553  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:42.394567  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 116.988µs
	I1119 23:07:42.394578  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:42.394596  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:42.394838  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.395796  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:42.397077  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.397151  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.400663  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401297  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:42.401366  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401574  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:42.612769  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:42.613454  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.613478  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.615709  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.618644  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619227  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:42.619265  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619437  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:42.650578  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:42.650662  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:42.651008  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m04" to be "Ready" ...
	I1119 23:07:42.759664  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.759695  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.762502  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.766101  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766612  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:42.766645  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766903  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:42.916732  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.916761  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.919291  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.922664  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923283  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:42.923322  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923548  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:43.068345  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:43.068378  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:43.068389  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03 ha-487903-m04
	I1119 23:07:43.156120  142733 node_ready.go:49] node "ha-487903-m04" is "Ready"
	I1119 23:07:43.156156  142733 node_ready.go:38] duration metric: took 505.123719ms for node "ha-487903-m04" to be "Ready" ...
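The Ready wait above polls the node object through the API server; roughly the same check can be expressed with kubectl from the host, assuming the ha-487903 context is active and the same 6-minute budget:

    # Illustrative sketch: wait for the new worker node to report Ready.
    kubectl --context ha-487903 wait --for=condition=Ready node/ha-487903-m04 --timeout=6m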
	I1119 23:07:43.156173  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:43.156241  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:43.175222  142733 system_svc.go:56] duration metric: took 19.040723ms WaitForService to wait for kubelet
	I1119 23:07:43.175261  142733 kubeadm.go:587] duration metric: took 781.202644ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:43.175288  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:43.180835  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180870  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180910  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180916  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180924  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180942  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180953  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180959  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180965  142733 node_conditions.go:105] duration metric: took 5.670636ms to run NodePressure ...
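The NodePressure pass above reads each node's capacity (2 CPUs and 17734596Ki of ephemeral storage per node in this run). The same figures can be pulled straight from the API, assuming the same context:

    # Illustrative sketch: print per-node CPU and ephemeral-storage capacity.
    kubectl --context ha-487903 get nodes \
      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage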
	I1119 23:07:43.180984  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:43.181017  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:43.181360  142733 ssh_runner.go:195] Run: rm -f paused
	I1119 23:07:43.187683  142733 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:07:43.188308  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:07:43.202770  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210054  142733 pod_ready.go:94] pod "coredns-66bc5c9577-5gt2t" is "Ready"
	I1119 23:07:43.210077  142733 pod_ready.go:86] duration metric: took 7.281319ms for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210085  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.216456  142733 pod_ready.go:94] pod "coredns-66bc5c9577-zjxkb" is "Ready"
	I1119 23:07:43.216477  142733 pod_ready.go:86] duration metric: took 6.387459ms for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.220711  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230473  142733 pod_ready.go:94] pod "etcd-ha-487903" is "Ready"
	I1119 23:07:43.230503  142733 pod_ready.go:86] duration metric: took 9.759051ms for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230514  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238350  142733 pod_ready.go:94] pod "etcd-ha-487903-m02" is "Ready"
	I1119 23:07:43.238386  142733 pod_ready.go:86] duration metric: took 7.863104ms for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238400  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.389841  142733 request.go:683] "Waited before sending request" delay="151.318256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-487903-m03"
	I1119 23:07:43.588929  142733 request.go:683] "Waited before sending request" delay="193.203585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:43.592859  142733 pod_ready.go:94] pod "etcd-ha-487903-m03" is "Ready"
	I1119 23:07:43.592895  142733 pod_ready.go:86] duration metric: took 354.487844ms for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.789462  142733 request.go:683] "Waited before sending request" delay="196.405608ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1119 23:07:43.797307  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.989812  142733 request.go:683] "Waited before sending request" delay="192.389949ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903"
	I1119 23:07:44.189117  142733 request.go:683] "Waited before sending request" delay="193.300165ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:44.194456  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903" is "Ready"
	I1119 23:07:44.194483  142733 pod_ready.go:86] duration metric: took 397.15415ms for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.194492  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.388959  142733 request.go:683] "Waited before sending request" delay="194.329528ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m02"
	I1119 23:07:44.589884  142733 request.go:683] "Waited before sending request" delay="195.382546ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:44.596472  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m02" is "Ready"
	I1119 23:07:44.596506  142733 pod_ready.go:86] duration metric: took 402.007843ms for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.596519  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.788946  142733 request.go:683] "Waited before sending request" delay="192.297042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m03"
	I1119 23:07:44.988960  142733 request.go:683] "Waited before sending request" delay="194.310641ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:44.996400  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m03" is "Ready"
	I1119 23:07:44.996441  142733 pod_ready.go:86] duration metric: took 399.911723ms for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.188855  142733 request.go:683] "Waited before sending request" delay="192.290488ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1119 23:07:45.196689  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.389182  142733 request.go:683] "Waited before sending request" delay="192.281881ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903"
	I1119 23:07:45.589591  142733 request.go:683] "Waited before sending request" delay="194.384266ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:45.595629  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903" is "Ready"
	I1119 23:07:45.595661  142733 pod_ready.go:86] duration metric: took 398.942038ms for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.595674  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.789154  142733 request.go:683] "Waited before sending request" delay="193.378185ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m02"
	I1119 23:07:45.989593  142733 request.go:683] "Waited before sending request" delay="195.373906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:45.995418  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m02" is "Ready"
	I1119 23:07:45.995451  142733 pod_ready.go:86] duration metric: took 399.769417ms for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.995462  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.188855  142733 request.go:683] "Waited before sending request" delay="193.309398ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m03"
	I1119 23:07:46.389512  142733 request.go:683] "Waited before sending request" delay="194.260664ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:46.394287  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m03" is "Ready"
	I1119 23:07:46.394312  142733 pod_ready.go:86] duration metric: took 398.844264ms for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.589870  142733 request.go:683] "Waited before sending request" delay="195.416046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1119 23:07:46.597188  142733 pod_ready.go:83] waiting for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.789771  142733 request.go:683] "Waited before sending request" delay="192.426623ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77wjf"
	I1119 23:07:46.989150  142733 request.go:683] "Waited before sending request" delay="193.435229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:46.993720  142733 pod_ready.go:94] pod "kube-proxy-77wjf" is "Ready"
	I1119 23:07:46.993753  142733 pod_ready.go:86] duration metric: took 396.52945ms for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.993765  142733 pod_ready.go:83] waiting for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.189146  142733 request.go:683] "Waited before sending request" delay="195.267437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk7mh"
	I1119 23:07:47.388849  142733 request.go:683] "Waited before sending request" delay="192.29395ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:47.395640  142733 pod_ready.go:94] pod "kube-proxy-fk7mh" is "Ready"
	I1119 23:07:47.395670  142733 pod_ready.go:86] duration metric: took 401.897062ms for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.395683  142733 pod_ready.go:83] waiting for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.589099  142733 request.go:683] "Waited before sending request" delay="193.31568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tkx9r"
	I1119 23:07:47.789418  142733 request.go:683] "Waited before sending request" delay="195.323511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:47.795048  142733 pod_ready.go:94] pod "kube-proxy-tkx9r" is "Ready"
	I1119 23:07:47.795078  142733 pod_ready.go:86] duration metric: took 399.387799ms for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.795088  142733 pod_ready.go:83] waiting for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.989569  142733 request.go:683] "Waited before sending request" delay="194.336733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxtk6"
	I1119 23:07:48.189017  142733 request.go:683] "Waited before sending request" delay="192.313826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m04"
	I1119 23:07:48.194394  142733 pod_ready.go:94] pod "kube-proxy-zxtk6" is "Ready"
	I1119 23:07:48.194435  142733 pod_ready.go:86] duration metric: took 399.338885ms for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.388945  142733 request.go:683] "Waited before sending request" delay="194.328429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1119 23:07:48.555571  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.789654  142733 request.go:683] "Waited before sending request" delay="195.382731ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:48.795196  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903" is "Ready"
	I1119 23:07:48.795234  142733 pod_ready.go:86] duration metric: took 239.629107ms for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.795246  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.989712  142733 request.go:683] "Waited before sending request" delay="194.356732ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m02"
	I1119 23:07:49.189524  142733 request.go:683] "Waited before sending request" delay="194.365482ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:49.195480  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m02" is "Ready"
	I1119 23:07:49.195503  142733 pod_ready.go:86] duration metric: took 400.248702ms for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.195512  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.388917  142733 request.go:683] "Waited before sending request" delay="193.285895ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m03"
	I1119 23:07:49.589644  142733 request.go:683] "Waited before sending request" delay="195.362698ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:49.594210  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m03" is "Ready"
	I1119 23:07:49.594248  142733 pod_ready.go:86] duration metric: took 398.725567ms for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.594266  142733 pod_ready.go:40] duration metric: took 6.406545371s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
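The per-pod waits above cycle through the same label selectors visible in the throttled GET URLs (component=... for control-plane pods, k8s-app=... for kube-proxy and CoreDNS). A compact way to inspect the same set by hand, assuming the ha-487903 context:

    # Illustrative sketch: list the control-plane and kube-proxy/CoreDNS pods by label.
    kubectl --context ha-487903 -n kube-system get pods -o wide \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
    kubectl --context ha-487903 -n kube-system get pods -o wide -l k8s-app=kube-proxy
    kubectl --context ha-487903 -n kube-system get pods -o wide -l k8s-app=kube-dns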
	I1119 23:07:49.639756  142733 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 23:07:49.641778  142733 out.go:179] * Done! kubectl is now configured to use "ha-487903" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.262907874Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-zjxkb,Uid:9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593602834521171,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-19T23:06:42.346676280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&PodSandboxMetadata{Name:busybox-7b57f96db7-vl8nf,Uid:946ad3f6-2e30-4020-9558-891d0523c640,Namespace:default,Attemp
t:0,},State:SANDBOX_READY,CreatedAt:1763593602818717252,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,pod-template-hash: 7b57f96db7,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-19T23:06:42.346621163Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&PodSandboxMetadata{Name:kube-proxy-fk7mh,Uid:8743ca8a-c5e5-4da6-a983-a6191d2a852a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593602806093326,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kube
rnetes.io/config.seen: 2025-11-19T23:06:42.346684984Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&PodSandboxMetadata{Name:kindnet-p9nqh,Uid:1dd7683b-c7e7-487c-904a-506a24f833d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593602794146030,Labels:map[string]string{app: kindnet,controller-revision-hash: 78f866cbfd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-19T23:06:42.346678238Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2f8e8800-093c-4c68-ac9c-300049543509,Namespace:kube-system,Attempt:0,},State:SAN
DBOX_READY,CreatedAt:1763593602775072631,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\
"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-19T23:06:42.346690062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5gt2t,Uid:4a5bca7b-6369-4f70-b467-829bf0c07711,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593602665362097,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-19T23:06:42.346626798Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-487903,Uid:f2660d05a38d7f409ed63a1278c85d94,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1763593570901566512,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{kubernetes.io/config.hash: f2660d05a38d7f409ed63a1278c85d94,kubernetes.io/config.seen: 2025-11-19T23:06:10.328673978Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-487903,Uid:da83ad8c68cff2289fa7c146858b394c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593570895715096,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,tier: control-plane,},Annotations:map[string]string{
kubernetes.io/config.hash: da83ad8c68cff2289fa7c146858b394c,kubernetes.io/config.seen: 2025-11-19T23:06:10.328671932Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&PodSandboxMetadata{Name:etcd-ha-487903,Uid:85430ee602aa7edb190bbc4c6f215cf4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593570887145698,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.15:2379,kubernetes.io/config.hash: 85430ee602aa7edb190bbc4c6f215cf4,kubernetes.io/config.seen: 2025-11-19T23:06:10.328666051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&
PodSandboxMetadata{Name:kube-scheduler-ha-487903,Uid:03498ab9918acb57128aa1e7f285fe26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593570882632926,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 03498ab9918acb57128aa1e7f285fe26,kubernetes.io/config.seen: 2025-11-19T23:06:10.328673191Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-487903,Uid:6f157a6337bb1e4494d2f66a12bd99f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1763593570872152513,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.15:8443,kubernetes.io/config.hash: 6f157a6337bb1e4494d2f66a12bd99f7,kubernetes.io/config.seen: 2025-11-19T23:06:10.328670514Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=50da27e2-64bb-469e-b959-b3d75ff4b0c5 name=/runtime.v1.RuntimeService/ListPodSandbox
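The ListPodSandbox/ListContainers debug entries above are raw CRI responses; the same inventory is easier to read through crictl on the node, using the crio socket configured in /etc/crictl.yaml earlier in the log:

    # Illustrative sketch: human-readable view of the sandboxes and containers dumped above.
    sudo crictl pods
    sudo crictl ps -a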
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.264129235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4524f424-f116-4fac-8619-4af2475daa09 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.264193663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4524f424-f116-4fac-8619-4af2475daa09 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.265033778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4524f424-f116-4fac-8619-4af2475daa09 name=/runtime.v1.RuntimeService/ListContainers
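	The debug entries above are routine CRI polling traffic against CRI-O's /runtime.v1.RuntimeService and /runtime.v1.ImageService endpoints (Version, ImageFsInfo, and an unfiltered ListContainers), most likely issued by the kubelet while this log tail was captured. As a minimal sketch only: the snippet below makes an equivalent ListContainers call from a standalone client. It assumes the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api Go client package; neither detail is taken from this log.

	// Illustrative sketch (not part of the test suite): query CRI-O's
	// RuntimeService/ListContainers, the same RPC whose responses appear above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed default CRI-O socket path; adjust for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty filter corresponds to the "No filters were applied,
		// returning full container list" lines in the log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.GetContainers() {
			fmt.Printf("%s\t%s\t%v\t%s\n",
				c.GetId()[:12],
				c.GetMetadata().GetName(),
				c.GetState(),
				c.GetLabels()["io.kubernetes.pod.name"])
		}
	}

	On the node itself, crictl ps -a drives the same ListContainers RPC and prints a comparable table of the containers listed in the responses above.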
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.285482533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63d2a8ea-6c0c-4eec-9a0c-a2cc1a93e7e7 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.285573438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63d2a8ea-6c0c-4eec-9a0c-a2cc1a93e7e7 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.288077764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8053feea-2218-4632-bb6a-46e90eee6914 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.288571374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593671288545705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8053feea-2218-4632-bb6a-46e90eee6914 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.337840680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a2e68ce-bbe4-4cbf-aadb-184bf12eb96c name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.338023339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a2e68ce-bbe4-4cbf-aadb-184bf12eb96c name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.340393476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dc02164-ceed-4663-b346-7ae36ad17eb6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.341994619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593671341927958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dc02164-ceed-4663-b346-7ae36ad17eb6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.403568391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90afc5ad-ae78-4b87-9c48-c1f5c5b77d5f name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.403853851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90afc5ad-ae78-4b87-9c48-c1f5c5b77d5f name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.405432416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1869f4ce-fd0b-4bf2-92e1-fab17ce37288 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.406667242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593671406578023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1869f4ce-fd0b-4bf2-92e1-fab17ce37288 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.408241499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c08bf442-b4ae-47c0-a165-f1cea8caf244 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.408320865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c08bf442-b4ae-47c0-a165-f1cea8caf244 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:51 ha-487903 crio[1051]: time="2025-11-19 23:07:51.408627122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c08bf442-b4ae-47c0-a165-f1cea8caf244 name=/runtime.v1.RuntimeService/ListContainers
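The entry above is the raw response CRI-O returns for a /runtime.v1.RuntimeService/ListContainers call; the container status table below condenses the same kind of data. A minimal sketch, assuming the default CRI-O socket at /var/run/crio/crio.sock and the k8s.io/cri-api and google.golang.org/grpc modules, of issuing that RPC directly and printing a few of the returned fields:

// Minimal sketch (assumptions: CRI-O on the default socket
// /var/run/crio/crio.sock, k8s.io/cri-api and google.golang.org/grpc available).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// /runtime.v1.RuntimeService/ListContainers with an empty filter,
	// i.e. every container in every state.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s  attempt=%d  state=%s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

Run on the node itself (as root, so the socket is readable), this should list the same truncated container IDs, names, attempts and states that appear in the table below.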
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6554703e81880       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      35 seconds ago       Running             storage-provisioner       4                   bcf53581b6e1f       storage-provisioner
	08ecabad51ca1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   1                   270bc5025a208       busybox-7b57f96db7-vl8nf
	f4db302f8e1d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   bcf53581b6e1f       storage-provisioner
	cf3b8bef3853f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   1                   02cf6c2f51b7a       coredns-66bc5c9577-zjxkb
	671e74cfb90ed       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      About a minute ago   Running             kindnet-cni               1                   21cea62c9e5ab       kindnet-p9nqh
	323c3e00977ee       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   1                   acd42dfb49d39       coredns-66bc5c9577-5gt2t
	8e1ce69b078fd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      About a minute ago   Running             kube-proxy                1                   2d9689b8c4fc5       kube-proxy-fk7mh
	407c1906949db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   2                   a4df466e854f6       kube-controller-manager-ha-487903
	0a3ebfa791420       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            2                   6d84027fd8d6f       kube-apiserver-ha-487903
	9f74b446d5d8c       ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178     About a minute ago   Running             kube-vip                  1                   aadb913b7f2aa       kube-vip-ha-487903
	fead33c061a4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            1                   83240b63d40d6       kube-scheduler-ha-487903
	b7d9fc5b2567d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Exited              kube-controller-manager   1                   a4df466e854f6       kube-controller-manager-ha-487903
	361486fad16d1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      1                   2ea97d68a5406       etcd-ha-487903
	37548c727f81a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Exited              kube-apiserver            1                   6d84027fd8d6f       kube-apiserver-ha-487903
	
	
	==> coredns [323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51076 - 6967 "HINFO IN 7389388171048239250.1605567939079731882. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415536075s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44386 - 47339 "HINFO IN 5025386377785033151.6368126768169479003. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.417913634s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
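Both coredns replicas show the same pattern after the reboot: the kubernetes plugin starts with an unsynced API, and its reflectors time out dialing the Service VIP at 10.96.0.1:443 until the apiserver and kube-proxy on this node are serving again. A minimal sketch, assuming it runs in a pod with the default in-cluster credentials and client-go available, of the namespace List call those reflectors keep retrying:

// Minimal sketch (assumption: run inside a pod with the default service
// account mounted and client-go on the module path).
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves KUBERNETES_SERVICE_HOST/PORT, i.e. the
	// 10.96.0.1:443 Service VIP the errors above fail to reach.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cfg.Timeout = 5 * time.Second

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same shape of request as the reflector's list: GET /api/v1/namespaces?limit=500
	ns, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
	if err != nil {
		// While the VIP is unreachable this reproduces the
		// "dial tcp 10.96.0.1:443: i/o timeout" seen in the coredns logs.
		fmt.Println("list namespaces failed:", err)
		return
	}
	fmt.Println("namespaces:", len(ns.Items))
}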
	
	
	==> describe nodes <==
	Name:               ha-487903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_48_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-487903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1ad91e99cee4f2a89ceda034e4410c0
	  System UUID:                a1ad91e9-9cee-4f2a-89ce-da034e4410c0
	  Boot ID:                    1b20db97-3ea3-483b-aa28-0753781928f2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vl8nf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-5gt2t             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     19m
	  kube-system                 coredns-66bc5c9577-zjxkb             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     19m
	  kube-system                 etcd-ha-487903                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         19m
	  kube-system                 kindnet-p9nqh                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19m
	  kube-system                 kube-apiserver-ha-487903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-487903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-fk7mh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-487903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-487903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 66s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)    kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)    kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     19m (x7 over 19m)    kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     19m                  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    19m                  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  19m                  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           19m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-487903 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   Starting                 101s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 70s                  kubelet          Node ha-487903 has been rebooted, boot id: 1b20db97-3ea3-483b-aa28-0753781928f2
	  Normal   RegisteredNode           64s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           62s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           26s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	
	
	Name:               ha-487903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_49_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:49:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    ha-487903-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcc51fc7a2ff40ae988dda36299d6bbc
	  System UUID:                dcc51fc7-a2ff-40ae-988d-da36299d6bbc
	  Boot ID:                    6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xjvfn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-487903-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         18m
	  kube-system                 kindnet-9zx8x                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      18m
	  kube-system                 kube-apiserver-ha-487903-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-487903-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-77wjf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-487903-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-487903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           18m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   NodeNotReady             14m                node-controller  Node ha-487903-m02 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Warning  Rebooted                 13m                kubelet          Node ha-487903-m02 has been rebooted, boot id: e9c055dc-1db9-46bb-aebb-1872d4771aa9
	  Normal   RegisteredNode           13m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   Starting                 79s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 65s                kubelet          Node ha-487903-m02 has been rebooted, boot id: 6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Normal   RegisteredNode           64s                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           62s                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           26s                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	
	
	Name:               ha-487903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_50_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:50:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-487903-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9ddbb3bf8b54cd48c27cb1452f23fd2
	  System UUID:                e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2
	  Boot ID:                    ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6q5gq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-487903-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         16m
	  kube-system                 kindnet-kslhw                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      16m
	  kube-system                 kube-apiserver-ha-487903-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-487903-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tkx9r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-487903-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-487903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 23s                kube-proxy       
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   NodeNotReady             12m                node-controller  Node ha-487903-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           65s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           63s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node ha-487903-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 33s                kubelet          Node ha-487903-m03 has been rebooted, boot id: ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Normal   RegisteredNode           27s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	
	
	Name:               ha-487903-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_51_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:51:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-487903-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ce148a1b98246f6ada06a5a5b14ddce
	  System UUID:                2ce148a1-b982-46f6-ada0-6a5a5b14ddce
	  Boot ID:                    7878c528-f6af-4234-946e-b1c55c0ff956
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-s9k2l       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      15m
	  kube-system                 kube-proxy-zxtk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     15m (x3 over 15m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x3 over 15m)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m (x3 over 15m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           15m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeReady                15m                kubelet          Node ha-487903-m04 status is now: NodeReady
	  Normal   RegisteredNode           13m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeNotReady             12m                node-controller  Node ha-487903-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           65s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           63s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           27s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 10s                kubelet          Node ha-487903-m04 has been rebooted, boot id: 7878c528-f6af-4234-946e-b1c55c0ff956
	  Normal   NodeHasSufficientMemory  10s (x4 over 10s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s (x4 over 10s)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s (x4 over 10s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             10s                kubelet          Node ha-487903-m04 status is now: NodeNotReady
	  Normal   NodeReady                10s (x2 over 10s)  kubelet          Node ha-487903-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 23:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 23:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000639] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.971469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.112003] kauditd_printk_skb: 93 callbacks suppressed
	[ +23.563071] kauditd_printk_skb: 193 callbacks suppressed
	[  +9.425091] kauditd_printk_skb: 6 callbacks suppressed
	[  +3.746118] kauditd_printk_skb: 281 callbacks suppressed
	[Nov19 23:07] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e] <==
	{"level":"warn","ts":"2025-11-19T23:07:06.203115Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.39.160:2380/version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:06.203177Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:07.487952Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:07.489213Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:10.205469Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.39.160:2380/version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:10.205597Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:12.489039Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:12.490337Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:14.207287Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.39.160:2380/version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:14.207351Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-19T23:07:16.117696Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"12a2eca89caa6ef","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-19T23:07:16.117815Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.118135Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.119339Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"12a2eca89caa6ef","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-19T23:07:16.120059Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.137587Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.139573Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"warn","ts":"2025-11-19T23:07:17.440682Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.54871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T23:07:17.440904Z","caller":"traceutil/trace.go:172","msg":"trace[1753473238] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2450; }","duration":"191.78252ms","start":"2025-11-19T23:07:17.249092Z","end":"2025-11-19T23:07:17.440875Z","steps":["trace[1753473238] 'agreement among raft nodes before linearized reading'  (duration: 71.313203ms)","trace[1753473238] 'range keys from in-memory index tree'  (duration: 120.155077ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T23:07:17.441401Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.503606ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8674384362276629839 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.191\" mod_revision:2420 > success:<request_put:<key:\"/registry/masterleases/192.168.39.191\" value_size:67 lease:8674384362276629837 >> failure:<>>","response":"size:16"}
	{"level":"warn","ts":"2025-11-19T23:07:48.542069Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"48ac9f57fd1b7861","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"10.845349ms"}
	{"level":"warn","ts":"2025-11-19T23:07:48.542343Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"12a2eca89caa6ef","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"11.124624ms"}
	{"level":"info","ts":"2025-11-19T23:07:48.543856Z","caller":"traceutil/trace.go:172","msg":"trace[1319701574] linearizableReadLoop","detail":"{readStateIndex:3180; appliedIndex:3180; }","duration":"158.564762ms","start":"2025-11-19T23:07:48.385263Z","end":"2025-11-19T23:07:48.543828Z","steps":["trace[1319701574] 'read index received'  (duration: 158.558852ms)","trace[1319701574] 'applied index is now lower than readState.Index'  (duration: 4.531ยตs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T23:07:48.545623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.332943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T23:07:48.545695Z","caller":"traceutil/trace.go:172","msg":"trace[1464022155] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2660; }","duration":"160.440859ms","start":"2025-11-19T23:07:48.385237Z","end":"2025-11-19T23:07:48.545678Z","steps":["trace[1464022155] 'agreement among raft nodes before linearized reading'  (duration: 158.944389ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:07:52 up 1 min,  0 users,  load average: 0.70, 0.30, 0.11
	Linux ha-487903 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95] <==
	I1119 23:07:25.551339       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.160 Flags: [] Table: 0 Realm: 0} 
	I1119 23:07:25.551569       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:07:25.551605       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:07:25.551717       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.187 Flags: [] Table: 0 Realm: 0} 
	I1119 23:07:25.551968       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:07:25.551979       1 main.go:301] handling current node
	I1119 23:07:25.555969       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:07:25.555998       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:07:25.556125       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.191 Flags: [] Table: 0 Realm: 0} 
	I1119 23:07:35.550785       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:07:35.551328       1 main.go:301] handling current node
	I1119 23:07:35.551482       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:07:35.551569       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:07:35.552075       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:07:35.552137       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:07:35.553487       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:07:35.553607       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:07:45.613812       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:07:45.613849       1 main.go:301] handling current node
	I1119 23:07:45.613870       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:07:45.613875       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:07:45.614056       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:07:45.614062       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:07:45.614196       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:07:45.614205       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af] <==
	I1119 23:06:41.578069       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:06:41.578813       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:41.578895       1 policy_source.go:240] refreshing policies
	I1119 23:06:41.608674       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:06:41.652233       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 23:06:41.655394       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 23:06:41.655821       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 23:06:41.655850       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 23:06:41.656332       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 23:06:41.656371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 23:06:41.656393       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 23:06:41.661422       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:06:41.661498       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:06:41.661574       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:06:41.678861       1 cache.go:39] Caches are synced for autoregister controller
	W1119 23:06:41.766283       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I1119 23:06:41.770787       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:06:41.846071       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1119 23:06:41.851314       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1119 23:06:42.378977       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:06:42.473024       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1119 23:06:45.193304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.191]
	I1119 23:06:47.599355       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:06:47.956548       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:06:50.470006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4] <==
	I1119 23:06:11.880236       1 server.go:150] Version: v1.34.1
	I1119 23:06:11.880286       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1119 23:06:12.813039       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1119 23:06:12.813073       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1119 23:06:12.813086       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1119 23:06:12.813090       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1119 23:06:12.813094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1119 23:06:12.813097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1119 23:06:12.813101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1119 23:06:12.813104       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1119 23:06:12.813108       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1119 23:06:12.813111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1119 23:06:12.813114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1119 23:06:12.813118       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1119 23:06:12.905211       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:12.913843       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1119 23:06:12.920093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1119 23:06:12.966564       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:12.985714       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1119 23:06:12.985841       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1119 23:06:12.986449       1 instance.go:239] Using reconciler: lease
	W1119 23:06:12.991441       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:32.899983       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1119 23:06:32.912361       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1119 23:06:32.990473       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6] <==
	I1119 23:06:47.634000       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 23:06:47.640819       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 23:06:47.645361       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 23:06:47.647492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 23:06:47.648823       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 23:06:47.648946       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 23:06:47.649012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 23:06:47.649925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:06:47.650061       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:06:47.652900       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 23:06:47.653973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:06:47.654043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:06:47.654066       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:06:47.655286       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 23:06:47.658251       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 23:06:47.661337       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:06:47.661495       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:06:47.665057       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 23:06:47.668198       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 23:06:47.718631       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m04"
	I1119 23:06:47.722547       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903"
	I1119 23:06:47.722625       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m02"
	I1119 23:06:47.722698       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m03"
	I1119 23:06:47.725022       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 23:07:42.933678       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	
	
	==> kube-controller-manager [b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b] <==
	I1119 23:06:13.347369       1 serving.go:386] Generated self-signed cert in-memory
	I1119 23:06:14.236064       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1119 23:06:14.236118       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:14.241243       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 23:06:14.241453       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1119 23:06:14.242515       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1119 23:06:14.242958       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 23:06:41.727088       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the reques
t from succeeding"
	
	
	==> kube-proxy [8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b] <==
	I1119 23:06:45.377032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:06:45.478419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:06:45.478668       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.15"]
	E1119 23:06:45.478924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:06:45.554663       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1119 23:06:45.554766       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1119 23:06:45.554814       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:06:45.584249       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:06:45.586108       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:06:45.586390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:45.595385       1 config.go:200] "Starting service config controller"
	I1119 23:06:45.595503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:06:45.595536       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:06:45.595628       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:06:45.595660       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:06:45.595795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:06:45.601653       1 config.go:309] "Starting node config controller"
	I1119 23:06:45.601683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:06:45.601692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:06:45.697008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:06:45.701060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:06:45.701074       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fead33c061a4deb0b1eb4ee9dd3e9e724dade2871a97a7aad79bef05acbd4a07] <==
	I1119 23:06:14.220668       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:06:24.867573       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.15:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1119 23:06:24.867603       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:06:24.867609       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:06:41.527454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:06:41.527518       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:41.550229       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.550314       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.551802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:06:41.551954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:06:41.651239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.450466    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8743ca8a-c5e5-4da6-a983-a6191d2a852a-xtables-lock\") pod \"kube-proxy-fk7mh\" (UID: \"8743ca8a-c5e5-4da6-a983-a6191d2a852a\") " pod="kube-system/kube-proxy-fk7mh"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.451469    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1dd7683b-c7e7-487c-904a-506a24f833d8-cni-cfg\") pod \"kindnet-p9nqh\" (UID: \"1dd7683b-c7e7-487c-904a-506a24f833d8\") " pod="kube-system/kindnet-p9nqh"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.451615    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8743ca8a-c5e5-4da6-a983-a6191d2a852a-lib-modules\") pod \"kube-proxy-fk7mh\" (UID: \"8743ca8a-c5e5-4da6-a983-a6191d2a852a\") " pod="kube-system/kube-proxy-fk7mh"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.624851    1174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-487903" podStartSLOduration=0.624815206 podStartE2EDuration="624.815206ms" podCreationTimestamp="2025-11-19 23:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:06:42.589949403 +0000 UTC m=+32.475007265" watchObservedRunningTime="2025-11-19 23:06:42.624815206 +0000 UTC m=+32.509873072"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.730458    1174 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-487903" podUID="1763d9e3-0be9-49f4-8f8a-a7a938a03e79"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.737037    1174 scope.go:117] "RemoveContainer" containerID="b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b"
	Nov 19 23:06:50 ha-487903 kubelet[1174]: E1119 23:06:50.419529    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593610418393085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:06:50 ha-487903 kubelet[1174]: E1119 23:06:50.419553    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593610418393085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:00 ha-487903 kubelet[1174]: E1119 23:07:00.425327    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593620423807836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:00 ha-487903 kubelet[1174]: E1119 23:07:00.425367    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593620423807836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.401322    1174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb\": container with ID starting with dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb not found: ID does not exist" containerID="dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: I1119 23:07:10.401423    1174 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb" err="rpc error: code = NotFound desc = could not find container \"dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb\": container with ID starting with dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb not found: ID does not exist"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.403444    1174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e\": container with ID starting with e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e not found: ID does not exist" containerID="e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: I1119 23:07:10.403487    1174 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e" err="rpc error: code = NotFound desc = could not find container \"e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e\": container with ID starting with e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e not found: ID does not exist"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.430603    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593630428472038  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.430660    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593630428472038  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:16 ha-487903 kubelet[1174]: I1119 23:07:16.059285    1174 scope.go:117] "RemoveContainer" containerID="f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17"
	Nov 19 23:07:20 ha-487903 kubelet[1174]: E1119 23:07:20.435116    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593640433962521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:20 ha-487903 kubelet[1174]: E1119 23:07:20.435143    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593640433962521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443024    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443098    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446103    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446443    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450547    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450679    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-487903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (120.49s)
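
For reference, the post-mortem above boils down to two probes: the API-server state reported by minikube and a scan for pods that are not Running. Below is a minimal Go sketch of the same collection run outside the test harness; the binary path, profile name, and kubectl arguments are taken verbatim from the helpers_test.go lines above, and everything else is illustrative rather than the harness's actual code.

// postmortem.go: rough sketch of the failure diagnostics gathered above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, mirroring the
// "(dbg) Run:" lines emitted by helpers_test.go.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	// API-server status for the profile (helpers_test.go:262).
	run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}", "-p", "ha-487903", "-n", "ha-487903")
	// Pods not in phase Running (helpers_test.go:269).
	run("kubectl", "--context", "ha-487903", "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}", "--field-selector=status.phase!=Running")
}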

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-487903" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-487903\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-487903\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"
APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-487903\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.15\",\"Port\":8443,\"Kubernete
sVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.191\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.160\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.187\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubeta
il\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"
MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 logs -n 25: (1.790488523s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt                                                            │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt                      │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt                                                │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt              │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt                                        │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ node    │ ha-487903 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:53 UTC │
	│ node    │ ha-487903 node start m02 --alsologtostderr -v 5                                                                                     │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:53 UTC │ 19 Nov 25 22:54 UTC │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5                                                                                               │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5                                                                                  │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ node    │ ha-487903 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5                                                                                               │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:05 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:05:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:05:52.706176  142733 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:05:52.706327  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706339  142733 out.go:374] Setting ErrFile to fd 2...
	I1119 23:05:52.706345  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706585  142733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:05:52.707065  142733 out.go:368] Setting JSON to false
	I1119 23:05:52.708054  142733 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17300,"bootTime":1763576253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 23:05:52.708149  142733 start.go:143] virtualization: kvm guest
	I1119 23:05:52.710481  142733 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 23:05:52.712209  142733 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:05:52.712212  142733 notify.go:221] Checking for updates...
	I1119 23:05:52.713784  142733 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:05:52.715651  142733 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:05:52.717169  142733 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 23:05:52.718570  142733 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 23:05:52.719907  142733 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:05:52.721783  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:05:52.722291  142733 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:05:52.757619  142733 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 23:05:52.759046  142733 start.go:309] selected driver: kvm2
	I1119 23:05:52.759059  142733 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:fal
se default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.759205  142733 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:05:52.760143  142733 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:05:52.760174  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:05:52.760222  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:05:52.760262  142733 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.760375  142733 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:05:52.762211  142733 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 23:05:52.763538  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:05:52.763567  142733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 23:05:52.763575  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:05:52.763673  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:05:52.763683  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:05:52.763787  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:05:52.763996  142733 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:05:52.764045  142733 start.go:364] duration metric: took 30.713µs to acquireMachinesLock for "ha-487903"
	I1119 23:05:52.764058  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:05:52.764066  142733 fix.go:54] fixHost starting: 
	I1119 23:05:52.765697  142733 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 23:05:52.765728  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:05:52.767327  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 23:05:52.767364  142733 main.go:143] libmachine: starting domain...
	I1119 23:05:52.767374  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:05:52.768372  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:05:52.768788  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:05:52.769282  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:05:52.770421  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 23:05:54.042651  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:05:54.044244  142733 main.go:143] libmachine: domain is now running
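	[editor's note] The restart logged above ("starting domain..." through "domain is now running") is roughly what `virsh start ha-487903` does by hand against the same libvirt domain. A minimal sketch, assuming only that virsh is on PATH and that the domain name matches the <name> element; this is not minikube's actual libmachine code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // startDomain boots a defined-but-stopped libvirt domain, the state
	    // the ha-487903 VM is in when the restart above begins.
	    func startDomain(name string) error {
	        out, err := exec.Command("virsh", "start", name).CombinedOutput()
	        if err != nil {
	            return fmt.Errorf("virsh start %s: %v: %s", name, err, out)
	        }
	        return nil
	    }

	    func main() {
	        if err := startDomain("ha-487903"); err != nil {
	            fmt.Println(err)
	        }
	    }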
	I1119 23:05:54.044267  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:05:54.045198  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045704  142733 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045724  142733 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 23:05:54.045732  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:05:54.046222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.046258  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 23:05:54.046271  142733 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 23:05:54.046295  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:05:54.046303  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:05:54.048860  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049341  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.049374  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049568  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:05:54.049870  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:05:54.049901  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:05:57.100181  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:03.180312  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:06.296535  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
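	[editor's note] The "waiting for SSH" phase above retries until port 22 answers; the intermediate "no route to host" errors are expected while the guest is still booting. A minimal sketch of that loop, with an illustrative address and timeouts rather than minikube's actual values:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // waitForSSH retries a TCP dial to the guest's SSH port until it
	    // answers or the deadline passes.
	    func waitForSSH(addr string, deadline time.Duration) error {
	        stop := time.Now().Add(deadline)
	        for time.Now().Before(stop) {
	            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	            if err == nil {
	                conn.Close()
	                return nil // reachable; provisioning can continue
	            }
	            // e.g. "connect: no route to host" while the guest boots
	            time.Sleep(2 * time.Second)
	        }
	        return fmt.Errorf("timed out waiting for SSH on %s", addr)
	    }

	    func main() {
	        fmt.Println(waitForSSH("192.168.39.15:22", 2*time.Minute))
	    }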
	I1119 23:06:06.299953  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300441  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.300473  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300784  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:06.301022  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:06.303559  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.303988  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.304019  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.304170  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.304355  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.304365  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:06.427246  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:06.427299  142733 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 23:06:06.430382  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.430835  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.430864  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.431166  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.431461  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.431480  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 23:06:06.561698  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 23:06:06.564714  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565207  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.565235  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565469  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.565702  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.565719  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:06.681480  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:06.681508  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:06.681543  142733 buildroot.go:174] setting up certificates
	I1119 23:06:06.681552  142733 provision.go:84] configureAuth start
	I1119 23:06:06.685338  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.685816  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.685842  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.688699  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.689164  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689319  142733 provision.go:143] copyHostCerts
	I1119 23:06:06.689357  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689414  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:06.689445  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689527  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:06.689624  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689643  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:06.689649  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689677  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:06.689736  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689753  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:06.689759  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689781  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:06.689843  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
	I1119 23:06:07.018507  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:07.018578  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:07.021615  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022141  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.022166  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022358  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.124817  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:07.124927  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:07.158158  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:07.158263  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1119 23:06:07.190088  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:07.190169  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:07.222689  142733 provision.go:87] duration metric: took 541.123395ms to configureAuth
	I1119 23:06:07.222718  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:07.222970  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:07.226056  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226580  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.226611  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226826  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.227127  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.227155  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:07.467444  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:07.467474  142733 machine.go:97] duration metric: took 1.166437022s to provisionDockerMachine
	I1119 23:06:07.467487  142733 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 23:06:07.467497  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:07.467573  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:07.470835  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471406  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.471439  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471649  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.557470  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:07.562862  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:07.562927  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:07.563034  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:07.563138  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:07.563154  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:07.563287  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:07.576076  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:07.609515  142733 start.go:296] duration metric: took 142.008328ms for postStartSetup
	I1119 23:06:07.609630  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:07.612430  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.612824  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.612846  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.613026  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.696390  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:07.696457  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:07.760325  142733 fix.go:56] duration metric: took 14.99624586s for fixHost
	I1119 23:06:07.763696  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764319  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.764358  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764614  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.764948  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.764966  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:07.879861  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593567.838594342
	
	I1119 23:06:07.879914  142733 fix.go:216] guest clock: 1763593567.838594342
	I1119 23:06:07.879939  142733 fix.go:229] Guest: 2025-11-19 23:06:07.838594342 +0000 UTC Remote: 2025-11-19 23:06:07.760362222 +0000 UTC m=+15.104606371 (delta=78.23212ms)
	I1119 23:06:07.879965  142733 fix.go:200] guest clock delta is within tolerance: 78.23212ms
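	[editor's note] The guest-clock check above runs `date +%s.%N` on the guest and compares it to the host clock. A sketch of that comparison; the 2s tolerance is an assumption for illustration, not the value minikube uses:

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )

	    // clockDelta parses `date +%s.%N` output and returns host-minus-guest.
	    func clockDelta(guestOut string) (time.Duration, error) {
	        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return 0, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            // pad/truncate the fractional part to nanoseconds
	            frac := (parts[1] + "000000000")[:9]
	            nsec, _ = strconv.ParseInt(frac, 10, 64)
	        }
	        guest := time.Unix(sec, nsec)
	        return time.Since(guest), nil
	    }

	    func main() {
	        d, _ := clockDelta("1763593567.838594342")
	        ok := d < 2*time.Second && d > -2*time.Second
	        fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
	    }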
	I1119 23:06:07.879974  142733 start.go:83] releasing machines lock for "ha-487903", held for 15.115918319s
	I1119 23:06:07.882904  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883336  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.883370  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883966  142733 ssh_runner.go:195] Run: cat /version.json
	I1119 23:06:07.884051  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:07.887096  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887222  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887583  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887617  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887792  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887817  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887816  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.888042  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:08.000713  142733 ssh_runner.go:195] Run: systemctl --version
	I1119 23:06:08.008530  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:08.160324  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:08.168067  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:08.168152  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:08.191266  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:08.191300  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:08.191379  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:08.213137  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:08.230996  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:08.231095  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:08.249013  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:08.265981  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:08.414758  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:08.622121  142733 docker.go:234] disabling docker service ...
	I1119 23:06:08.622209  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:08.639636  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:08.655102  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:08.816483  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:08.968104  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:08.984576  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:09.008691  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:09.008781  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.022146  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:09.022232  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.035596  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.049670  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.063126  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:09.077541  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.091115  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.112968  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
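	[editor's note] The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls). A rough Go equivalent of just the pause_image rewrite, purely illustrative and not minikube's own implementation:

	    package main

	    import (
	        "fmt"
	        "os"
	        "regexp"
	    )

	    // setPauseImage rewrites any pause_image line in the given CRI-O
	    // drop-in file, mirroring the sed command in the log.
	    func setPauseImage(path, image string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	        out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	        return os.WriteFile(path, out, 0o644)
	    }

	    func main() {
	        err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
	        if err != nil {
	            fmt.Println(err)
	        }
	    }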
	I1119 23:06:09.126168  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:09.137702  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:09.137765  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:09.176751  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:06:09.191238  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:09.335526  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:09.473011  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:09.473116  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:06:09.479113  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:09.479189  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:09.483647  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:09.528056  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:06:09.528131  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.559995  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.592672  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:09.597124  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597564  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:09.597590  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597778  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:09.602913  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
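	[editor's note] The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and re-appends the current one. A hedged sketch of the same filter-and-append idea; the file path in main is illustrative so the example does not touch a real /etc/hosts:

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // ensureHostsEntry removes existing lines ending in "\t<host>" and
	    // appends "<ip>\t<host>", like the grep -v / echo pipeline above.
	    func ensureHostsEntry(path, ip, host string) error {
	        data, err := os.ReadFile(path)
	        if err != nil && !os.IsNotExist(err) {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(string(data), "\n") {
	            if line != "" && !strings.HasSuffix(line, "\t"+host) {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, ip+"\t"+host)
	        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	        fmt.Println(ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"))
	    }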
	I1119 23:06:09.620048  142733 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:06:09.620196  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:09.620243  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.674254  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.674279  142733 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:06:09.674328  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.712016  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.712041  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:09.712058  142733 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 23:06:09.712184  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:06:09.712274  142733 ssh_runner.go:195] Run: crio config
	I1119 23:06:09.768708  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:06:09.768732  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:06:09.768752  142733 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:06:09.768773  142733 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:06:09.768939  142733 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 23:06:09.768965  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:09.769018  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:09.795571  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:09.795712  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 23:06:09.795795  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:09.812915  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:09.812990  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 23:06:09.827102  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 23:06:09.850609  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:09.873695  142733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 23:06:09.898415  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:09.921905  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:09.927238  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:09.944650  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:10.092858  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:10.131346  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 23:06:10.131374  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:10.131396  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.131585  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:10.131628  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:10.131638  142733 certs.go:257] generating profile certs ...
	I1119 23:06:10.131709  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:10.131766  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 23:06:10.131799  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:10.131811  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:10.131823  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:10.131835  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:10.131844  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:10.131857  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:10.131867  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:10.131905  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:10.131923  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:10.131976  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:10.132017  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:10.132030  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:10.132063  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:10.132120  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:10.132148  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:10.132194  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:10.132221  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.132233  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.132244  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.132912  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:10.173830  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:10.215892  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:10.259103  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:10.294759  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:10.334934  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:10.388220  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:10.446365  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:10.481746  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:10.514956  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:10.547594  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:10.595613  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:06:10.619484  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:10.626921  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:10.641703  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647634  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647703  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.655724  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:10.670575  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:10.684630  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690618  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690694  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.698531  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:06:10.713731  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:10.729275  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735204  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735297  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.744718  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:06:10.760092  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:10.765798  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:10.773791  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:10.781675  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:10.789835  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:10.797921  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:10.806330  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
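	[editor's note] The `openssl x509 -checkend 86400` calls above exit non-zero if a certificate expires within the next 24 hours. A minimal Go equivalent using crypto/x509 (minikube itself shells out to openssl here; the certificate path is taken from the log for illustration):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the PEM certificate at path expires
	    // within the given window, matching -checkend semantics.
	    func expiresWithin(path string, window time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM block in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(window).After(cert.NotAfter), nil
	    }

	    func main() {
	        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        fmt.Println(soon, err)
	    }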
	I1119 23:06:10.814663  142733 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:06:10.814784  142733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:06:10.814836  142733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:06:10.862721  142733 cri.go:89] found id: ""
	I1119 23:06:10.862820  142733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:06:10.906379  142733 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:06:10.906398  142733 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:06:10.906444  142733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:06:10.937932  142733 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:06:10.938371  142733 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.938511  142733 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 23:06:10.938761  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.939284  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:06:10.939703  142733 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 23:06:10.939720  142733 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 23:06:10.939727  142733 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 23:06:10.939732  142733 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 23:06:10.939737  142733 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 23:06:10.939800  142733 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 23:06:10.940217  142733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:06:10.970469  142733 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 23:06:10.970501  142733 kubeadm.go:602] duration metric: took 64.095819ms to restartPrimaryControlPlane
	I1119 23:06:10.970515  142733 kubeadm.go:403] duration metric: took 155.861263ms to StartCluster
	I1119 23:06:10.970538  142733 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.970645  142733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.971536  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.971861  142733 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:10.971912  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:10.971934  142733 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:06:10.972157  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.972266  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:10.972332  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:10.972347  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 95.206µs
	I1119 23:06:10.972358  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:10.972373  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:10.972588  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.974762  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:10.975000  142733 out.go:179] * Enabled addons: 
	I1119 23:06:10.976397  142733 addons.go:515] duration metric: took 4.466316ms for enable addons: enabled=[]
	I1119 23:06:10.977405  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.977866  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:10.977902  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.978075  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:11.174757  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:11.174779  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:11.179357  142733 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 23:06:11.179394  142733 start.go:247] waiting for cluster config update ...
	I1119 23:06:11.179405  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:11.181383  142733 out.go:203] 
	I1119 23:06:11.182846  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:11.182976  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.184565  142733 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 23:06:11.185697  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:11.185715  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:11.185830  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:11.185845  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:11.185991  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.186234  142733 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:11.186285  142733 start.go:364] duration metric: took 28.134µs to acquireMachinesLock for "ha-487903-m02"
	I1119 23:06:11.186301  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:11.186314  142733 fix.go:54] fixHost starting: m02
	I1119 23:06:11.187948  142733 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 23:06:11.187969  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:11.189608  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 23:06:11.189647  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:11.189655  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:11.190534  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:11.190964  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:11.191485  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:11.192659  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 23:06:12.559560  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:12.561198  142733 main.go:143] libmachine: domain is now running
	I1119 23:06:12.561220  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:12.562111  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562699  142733 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562715  142733 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 23:06:12.562721  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:12.563203  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.563229  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 23:06:12.563240  142733 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 23:06:12.563244  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:12.563250  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:12.566254  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.566903  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.566943  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.567198  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:12.567490  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:12.567510  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:15.660251  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:21.740210  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:24.742545  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 23:06:27.848690  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:27.852119  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852581  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.852609  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852840  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:27.853068  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:27.855169  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855519  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.855541  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855673  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.855857  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.855866  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:27.961777  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:27.961813  142733 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 23:06:27.964686  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965144  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.965168  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965332  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.965514  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.965525  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 23:06:28.090321  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 23:06:28.093353  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093734  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.093771  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093968  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.094236  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.094259  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:28.210348  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:28.210378  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:28.210394  142733 buildroot.go:174] setting up certificates
	I1119 23:06:28.210406  142733 provision.go:84] configureAuth start
	I1119 23:06:28.213280  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.213787  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.213819  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216188  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216513  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.216537  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216650  142733 provision.go:143] copyHostCerts
	I1119 23:06:28.216681  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216719  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:28.216731  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216806  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:28.216924  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.216954  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:28.216962  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.217011  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:28.217078  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217105  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:28.217114  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217151  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:28.217219  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
	I1119 23:06:28.306411  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:28.306488  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:28.309423  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309811  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.309838  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309994  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.397995  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:28.398093  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:28.433333  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:28.433422  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:06:28.465202  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:28.465281  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:28.497619  142733 provision.go:87] duration metric: took 287.196846ms to configureAuth
	I1119 23:06:28.497657  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:28.497961  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:28.500692  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501143  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.501166  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501348  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.501530  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.501542  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:28.756160  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:28.756188  142733 machine.go:97] duration metric: took 903.106737ms to provisionDockerMachine
	I1119 23:06:28.756199  142733 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 23:06:28.756221  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:28.756309  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:28.759030  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759384  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.759410  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759547  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.845331  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:28.850863  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:28.850908  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:28.850968  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:28.851044  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:28.851055  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:28.851135  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:28.863679  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:28.895369  142733 start.go:296] duration metric: took 139.152116ms for postStartSetup
	I1119 23:06:28.895468  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:28.898332  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898765  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.898790  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898999  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.985599  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:28.985693  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:29.047204  142733 fix.go:56] duration metric: took 17.860883759s for fixHost
	I1119 23:06:29.050226  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050744  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.050767  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050981  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:29.051235  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:29.051247  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:29.170064  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593589.134247097
	
	I1119 23:06:29.170097  142733 fix.go:216] guest clock: 1763593589.134247097
	I1119 23:06:29.170109  142733 fix.go:229] Guest: 2025-11-19 23:06:29.134247097 +0000 UTC Remote: 2025-11-19 23:06:29.047235815 +0000 UTC m=+36.391479959 (delta=87.011282ms)
	I1119 23:06:29.170136  142733 fix.go:200] guest clock delta is within tolerance: 87.011282ms
	I1119 23:06:29.170145  142733 start.go:83] releasing machines lock for "ha-487903-m02", held for 17.983849826s
	I1119 23:06:29.173173  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.173648  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.173674  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.175909  142733 out.go:179] * Found network options:
	I1119 23:06:29.177568  142733 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 23:06:29.178760  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:06:29.179292  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:06:29.179397  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:29.179416  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:29.182546  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.182562  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183004  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183038  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183185  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183194  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:29.183426  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:29.429918  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:29.437545  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:29.437605  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:29.459815  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:29.459846  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:29.459981  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:29.484636  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:29.506049  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:29.506131  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:29.529159  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:29.547692  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:29.709216  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:29.933205  142733 docker.go:234] disabling docker service ...
	I1119 23:06:29.933271  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:29.951748  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:29.967973  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:30.147148  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:30.300004  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:30.316471  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:30.341695  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:30.341768  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.355246  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:30.355313  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.368901  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.381931  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.395421  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:30.410190  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.424532  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.447910  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.462079  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:30.473475  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:30.473555  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:30.495385  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:06:30.507744  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:30.650555  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:30.778126  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:30.778224  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:06:30.784440  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:30.784509  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:30.789036  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:30.834259  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:06:30.834368  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.866387  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.901524  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:30.902829  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:06:30.906521  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.906929  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:30.906948  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.907113  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:30.912354  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:30.929641  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:06:30.929929  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:30.931609  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:06:30.931865  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 23:06:30.931896  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:30.931917  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:30.932057  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:30.932118  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:30.932128  142733 certs.go:257] generating profile certs ...
	I1119 23:06:30.932195  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:30.932244  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 23:06:30.932279  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:30.932291  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:30.932302  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:30.932313  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:30.932326  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:30.932335  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:30.932348  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:30.932360  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:30.932370  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:30.932416  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:30.932442  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:30.932451  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:30.932473  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:30.932493  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:30.932514  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:30.932559  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:30.932585  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:30.932599  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:30.932609  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:30.934682  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935112  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:30.935137  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935281  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:31.009328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:06:31.016386  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:06:31.030245  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:06:31.035820  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:06:31.049236  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:06:31.054346  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:06:31.067895  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:06:31.073323  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:06:31.087209  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:06:31.092290  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:06:31.105480  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:06:31.110774  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:06:31.124311  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:31.157146  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:31.188112  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:31.219707  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:31.252776  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:31.288520  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:31.324027  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:31.356576  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:31.388386  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:31.418690  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:31.450428  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:31.480971  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:06:31.502673  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:06:31.525149  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:06:31.547365  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:06:31.569864  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:06:31.592406  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:06:31.614323  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 23:06:31.638212  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:31.645456  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:31.659620  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665114  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665178  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.672451  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:06:31.686443  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:31.700888  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706357  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706409  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.713959  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:31.727492  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:31.741862  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747549  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747622  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.755354  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:06:31.769594  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:31.775132  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:31.783159  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:31.790685  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:31.798517  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:31.806212  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:31.814046  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
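The -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. A pure-Go equivalent of that check, as a sketch; the path in main is illustrative:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// validFor reports whether the first certificate in a PEM file is still
// valid for at least d, i.e. the `openssl x509 -checkend` equivalent.
func validFor(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("%s: no PEM block found", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
    ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    fmt.Println(ok, err)
}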
	I1119 23:06:31.822145  142733 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 23:06:31.822259  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:06:31.822290  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:31.822339  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:31.849048  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:31.849130  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
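The manifest above is what gets written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, where the kubelet runs it as a static pod holding the 192.168.39.254 control-plane VIP. A pared-down sketch of rendering such a manifest from a template; this is illustrative and not minikube's actual generator:

package main

import (
    "os"
    "text/template"
)

// A trimmed template: only the VIP address and interface are parameterized
// here, whereas the real config also carries the leader-election and
// load-balancing settings shown in the log above.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v1.0.1
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: "{{ .Interface }}"
  hostNetwork: true
`

func main() {
    t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    data := struct{ VIP, Interface string }{"192.168.39.254", "eth0"}
    // The rendered output would be written under /etc/kubernetes/manifests/,
    // which the kubelet watches for static pods.
    if err := t.Execute(os.Stdout, data); err != nil {
        panic(err)
    }
}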
	I1119 23:06:31.849212  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:31.862438  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:31.862506  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:06:31.874865  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:06:31.897430  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:31.918586  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:31.939534  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:31.943930  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
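The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current VIP, staging the result in a temp file before copying it back. The same idea in Go, as a sketch; the final rename stands in for the log's sudo cp:

package main

import (
    "os"
    "strings"
)

// pinHost rewrites the hosts file so that exactly one line maps hostname to ip,
// mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(path, ip, hostname string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+hostname) {
            continue // drop the stale entry, like grep -v $'\t<hostname>$'
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+hostname)
    tmp := path + ".tmp"
    if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
        return err
    }
    return os.Rename(tmp, path)
}

func main() {
    if err := pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
        panic(err)
    }
}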
	I1119 23:06:31.958780  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.100156  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.133415  142733 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:32.133754  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.133847  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:32.133936  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:32.133949  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 113.063µs
	I1119 23:06:32.133960  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:32.133970  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:32.134176  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.135284  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:06:32.136324  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.136777  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.139351  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.139927  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:32.139963  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.140169  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:32.321166  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.321693  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.321714  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.323895  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.326607  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327119  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:32.327146  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327377  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:32.352387  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:06:32.352506  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:06:32.352953  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:06:32.500722  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.500745  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.503448  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
	I1119 23:06:34.010161  142733 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:06:41.592816  142733 node_ready.go:49] node "ha-487903-m02" is "Ready"
	I1119 23:06:41.592846  142733 node_ready.go:38] duration metric: took 9.239866557s for node "ha-487903-m02" to be "Ready" ...
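node_ready.go polls the node object until its Ready condition turns True, here roughly 9.2s including the Retry-After back-off noted above. A minimal client-go sketch of that kind of wait; the kubeconfig path and poll interval are illustrative:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports a Ready condition of True.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    for {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(2 * time.Second): // transient errors (e.g. Retry-After) simply retry
        }
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    defer cancel()
    fmt.Println(waitNodeReady(ctx, cs, "ha-487903-m02"))
}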
	I1119 23:06:41.592864  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:06:41.592953  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.093838  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.118500  142733 api_server.go:72] duration metric: took 9.985021825s to wait for apiserver process to appear ...
	I1119 23:06:42.118528  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:06:42.118547  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.123892  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.123926  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:42.619715  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.637068  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.637097  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.118897  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.133996  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.134034  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.618675  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.661252  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.661293  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.118914  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.149362  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.149396  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.618983  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.670809  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.670848  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.119579  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.130478  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:45.130510  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.619260  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.628758  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:06:45.631891  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:06:45.631928  142733 api_server.go:131] duration metric: took 3.513391545s to wait for apiserver health ...
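The 500s above are expected while the rbac and priority-class post-start hooks finish; the wait simply re-polls /healthz every 500ms until it returns 200 (about 3.5s in total here). A hedged sketch of such a poll; it skips certificate verification only to stay short, whereas the real check authenticates with the cluster's client certificates:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitHealthz polls an HTTPS /healthz endpoint until it returns 200 or the
// deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        },
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            // 500 while post-start hooks (rbac/bootstrap-roles, ...) complete: retry.
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
    fmt.Println(waitHealthz("https://192.168.39.15:8443/healthz", 2*time.Minute))
}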
	I1119 23:06:45.631939  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:06:45.660854  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:06:45.660934  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660946  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660955  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.660965  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.660971  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.660978  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.660983  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.660988  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.660995  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.661002  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.661009  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.661014  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.661025  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.661033  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.661038  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.661043  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.661047  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.661051  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.661062  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.661066  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.661071  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.661075  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.661080  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.661084  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.661091  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.661095  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.661103  142733 system_pods.go:74] duration metric: took 29.156984ms to wait for pod list to return data ...
	I1119 23:06:45.661123  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:06:45.681470  142733 default_sa.go:45] found service account: "default"
	I1119 23:06:45.681503  142733 default_sa.go:55] duration metric: took 20.368831ms for default service account to be created ...
	I1119 23:06:45.681516  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:06:45.756049  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:06:45.756097  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756115  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756124  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.756130  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.756141  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.756153  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.756158  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.756163  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.756168  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.756180  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.756187  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.756193  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.756214  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.756220  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.756227  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.756232  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.756242  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.756248  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.756253  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.756258  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.756267  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.756276  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.756281  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.756286  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.756290  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.756299  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.756310  142733 system_pods.go:126] duration metric: took 74.786009ms to wait for k8s-apps to be running ...
	I1119 23:06:45.756320  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:06:45.756377  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:06:45.804032  142733 system_svc.go:56] duration metric: took 47.697905ms WaitForService to wait for kubelet
	I1119 23:06:45.804075  142733 kubeadm.go:587] duration metric: took 13.670605736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:06:45.804108  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:06:45.809115  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809156  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809181  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809187  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809193  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809200  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809208  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809216  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809222  142733 node_conditions.go:105] duration metric: took 5.108401ms to run NodePressure ...
	I1119 23:06:45.809243  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:45.809289  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:45.811415  142733 out.go:203] 
	I1119 23:06:45.813102  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:45.813254  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.814787  142733 out.go:179] * Starting "ha-487903-m03" control-plane node in "ha-487903" cluster
	I1119 23:06:45.815937  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:45.815964  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:45.816100  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:45.816115  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:45.816268  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.816543  142733 start.go:360] acquireMachinesLock for ha-487903-m03: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:45.816612  142733 start.go:364] duration metric: took 39.245µs to acquireMachinesLock for "ha-487903-m03"
	I1119 23:06:45.816630  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:45.816642  142733 fix.go:54] fixHost starting: m03
	I1119 23:06:45.818510  142733 fix.go:112] recreateIfNeeded on ha-487903-m03: state=Stopped err=<nil>
	W1119 23:06:45.818540  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:45.819904  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m03" ...
	I1119 23:06:45.819950  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:45.819961  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:45.820828  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:45.821278  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:45.821805  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:45.823105  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m03</name>
	  <uuid>e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/ha-487903-m03.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b3:68:3d'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7a:90:da'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 23:06:47.444391  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:47.445887  142733 main.go:143] libmachine: domain is now running
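libmachine boots the stopped m03 VM from the domain XML shown above. A rough equivalent using the virsh CLI from Go; the XML file name is illustrative:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// startDomain (re)defines a libvirt domain from its XML and starts it, roughly
// what `virsh define` followed by `virsh start` do on the system connection.
func startDomain(xmlPath, name string) error {
    for _, args := range [][]string{
        {"virsh", "--connect", "qemu:///system", "define", xmlPath},
        {"virsh", "--connect", "qemu:///system", "start", name},
    } {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("%v: %w", args, err)
        }
    }
    return nil
}

func main() {
    // ha-487903-m03.xml stands for a dump of the domain XML shown above.
    fmt.Println(startDomain("ha-487903-m03.xml", "ha-487903-m03"))
}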
	I1119 23:06:47.445908  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:47.446706  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447357  142733 main.go:143] libmachine: domain ha-487903-m03 has current primary IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447380  142733 main.go:143] libmachine: found domain IP: 192.168.39.160
	I1119 23:06:47.447388  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:47.447950  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.447985  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"}
	I1119 23:06:47.447998  142733 main.go:143] libmachine: reserved static IP address 192.168.39.160 for domain ha-487903-m03
	I1119 23:06:47.448003  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:47.448010  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:47.450788  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.451253  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451441  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:47.451661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:06:47.451673  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:50.540171  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:56.620202  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:59.621964  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: connection refused
	I1119 23:07:02.732773  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
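The "no route to host" and "connection refused" errors above are the normal progression while the guest boots and sshd comes up; the runner keeps re-dialing port 22 until the exit 0 probe succeeds. A minimal TCP-level sketch of that wait:

package main

import (
    "fmt"
    "net"
    "time"
)

// waitForSSH dials addr until the TCP connection is accepted or the deadline
// passes; it does not authenticate, it only proves sshd is listening.
func waitForSSH(addr string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err == nil {
            conn.Close()
            return nil
        }
        // "no route to host" and "connection refused" both land here while the guest boots.
        time.Sleep(3 * time.Second)
    }
    return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
}

func main() {
    fmt.Println(waitForSSH("192.168.39.160:22", 5*time.Minute))
}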
	I1119 23:07:02.736628  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737046  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.737076  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737371  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:02.737615  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:02.740024  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.740555  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740752  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.741040  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.741054  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:02.852322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:02.852355  142733 buildroot.go:166] provisioning hostname "ha-487903-m03"
	I1119 23:07:02.855519  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856083  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.856112  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856309  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.856556  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.856572  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m03 && echo "ha-487903-m03" | sudo tee /etc/hostname
	I1119 23:07:02.990322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m03
	
	I1119 23:07:02.993714  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994202  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.994233  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994405  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.994627  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.994651  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:03.118189  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:07:03.118221  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:03.118237  142733 buildroot.go:174] setting up certificates
	I1119 23:07:03.118248  142733 provision.go:84] configureAuth start
	I1119 23:07:03.121128  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.121630  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.121656  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124221  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124569  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.124592  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124715  142733 provision.go:143] copyHostCerts
	I1119 23:07:03.124748  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124787  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:03.124797  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124892  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:03.125005  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125037  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:03.125047  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125090  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:03.125160  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125188  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:03.125198  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125238  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:03.125306  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m03 san=[127.0.0.1 192.168.39.160 ha-487903-m03 localhost minikube]
	I1119 23:07:03.484960  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:03.485022  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:03.487560  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488008  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.488032  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488178  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:03.574034  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:03.574117  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:03.604129  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:03.604216  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:03.635162  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:03.635235  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:07:03.668358  142733 provision.go:87] duration metric: took 550.091154ms to configureAuth
	I1119 23:07:03.668387  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:03.668643  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:03.671745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672214  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.672242  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672395  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:03.672584  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:03.672599  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:03.950762  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:03.950792  142733 machine.go:97] duration metric: took 1.213162195s to provisionDockerMachine
	I1119 23:07:03.950807  142733 start.go:293] postStartSetup for "ha-487903-m03" (driver="kvm2")
	I1119 23:07:03.950821  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:03.950908  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:03.954010  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954449  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.954472  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954609  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.043080  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:04.048534  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:04.048567  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:04.048645  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:04.048729  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:04.048741  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:04.048850  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:04.062005  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:04.095206  142733 start.go:296] duration metric: took 144.382125ms for postStartSetup
	I1119 23:07:04.095293  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:04.097927  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098314  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.098337  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098469  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.187620  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:04.187695  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:04.250288  142733 fix.go:56] duration metric: took 18.433638518s for fixHost
	I1119 23:07:04.253813  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254395  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.254423  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254650  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:04.254923  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:04.254938  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:04.407951  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593624.369608325
	
	I1119 23:07:04.407981  142733 fix.go:216] guest clock: 1763593624.369608325
	I1119 23:07:04.407992  142733 fix.go:229] Guest: 2025-11-19 23:07:04.369608325 +0000 UTC Remote: 2025-11-19 23:07:04.250316644 +0000 UTC m=+71.594560791 (delta=119.291681ms)
	I1119 23:07:04.408018  142733 fix.go:200] guest clock delta is within tolerance: 119.291681ms
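(The clock check above runs "date +%s.%N" on the guest and compares the result against the host's timestamp; the ~119ms drift is within tolerance, so no adjustment is made. A rough sketch of that comparison is shown below, assuming the guest output has already been captured as a string — values are copied from the log, and this is not the fix.go implementation itself.)

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the "seconds.nanoseconds" output of `date +%s.%N` and
// returns how far the guest clock is from the supplied host time.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Guest and host timestamps taken from the log above.
	d, _ := clockDelta("1763593624.369608325", time.Unix(0, 1763593624250316644))
	fmt.Println(d) // ~119ms, inside the tolerance minikube accepts
}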
	I1119 23:07:04.408026  142733 start.go:83] releasing machines lock for "ha-487903-m03", held for 18.591403498s
	I1119 23:07:04.411093  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.411490  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.411518  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.413431  142733 out.go:179] * Found network options:
	I1119 23:07:04.414774  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191
	W1119 23:07:04.415854  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.415891  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416317  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416348  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:04.416422  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:04.416436  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:04.419695  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.419745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420204  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420228  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420310  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420352  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420397  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.420643  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.657635  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:04.665293  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:04.665372  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:04.689208  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:04.689244  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:04.689352  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:04.714215  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:04.733166  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:04.733238  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:04.756370  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:04.778280  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:04.943140  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:05.174139  142733 docker.go:234] disabling docker service ...
	I1119 23:07:05.174230  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:05.192652  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:05.219388  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:05.383745  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:05.538084  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:05.555554  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:05.579503  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:05.579567  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.593464  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:05.593530  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.609133  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.624066  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.637817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:05.653008  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.666833  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.691556  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.705398  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:05.717404  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:05.717480  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:05.740569  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:07:05.753510  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:05.907119  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:07:06.048396  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:06.048486  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:06.055638  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:06.055719  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:06.061562  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:06.110271  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:07:06.110342  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:06.146231  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:06.178326  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:06.179543  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:06.180760  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:06.184561  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.184934  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:06.184957  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.185144  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:06.190902  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:06.207584  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:06.207839  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:06.209435  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:06.209634  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.160
	I1119 23:07:06.209644  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:06.209656  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:06.209760  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:06.209804  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:06.209811  142733 certs.go:257] generating profile certs ...
	I1119 23:07:06.209893  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:07:06.209959  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.0aa3aad5
	I1119 23:07:06.210018  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:07:06.210035  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:06.210054  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:06.210067  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:06.210080  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:06.210091  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:07:06.210102  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:07:06.210114  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:07:06.210126  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:07:06.210182  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:06.210223  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:06.210235  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:06.210266  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:06.210291  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:06.210312  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:06.210372  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:06.210412  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:06.210426  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:06.210444  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:06.213240  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213640  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:06.213661  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213778  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:06.286328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:07:06.292502  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:07:06.306380  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:07:06.311916  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:07:06.325372  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:07:06.331268  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:07:06.346732  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:07:06.351946  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:07:06.366848  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:07:06.372483  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:07:06.389518  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:07:06.395938  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:07:06.409456  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:06.450401  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:06.486719  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:06.523798  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:06.561368  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:07:06.599512  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:07:06.634946  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:07:06.670031  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:07:06.704068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:06.735677  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:06.768990  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:06.806854  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:07:06.832239  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:07:06.856375  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:07:06.879310  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:07:06.902404  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:07:06.927476  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:07:06.952223  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 23:07:06.974196  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:06.981644  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:06.999412  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005373  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005446  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.013895  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:07.031130  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:07.046043  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.051937  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.052014  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.059543  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:07.078500  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:07.093375  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099508  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099578  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.107551  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:07:07.123243  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:07.129696  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:07:07.137849  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:07:07.145809  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:07:07.153731  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:07:07.161120  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:07:07.168309  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
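(The run of "openssl x509 -noout -in … -checkend 86400" commands above verifies that each reused control-plane certificate on m03 remains valid for at least the next 24 hours. An equivalent check in Go is sketched below; the file path is one of the paths from the log and the helper name is illustrative, not part of minikube.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d — the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}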
	I1119 23:07:07.176142  142733 kubeadm.go:935] updating node {m03 192.168.39.160 8443 v1.34.1 crio true true} ...
	I1119 23:07:07.176256  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:07:07.176285  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:07:07.176329  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:07:07.203479  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:07:07.203570  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
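(The kube-vip static-pod manifest printed above is produced by substituting the cluster VIP 192.168.39.254 and API port 8443 into a template, then written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down. The sketch below shows that kind of substitution with text/template using a deliberately trimmed template; the real minikube template carries the full container spec, env list and volume mounts seen in the log.)

package main

import (
	"os"
	"text/template"
)

// Abbreviated for illustration; not minikube's actual kube-vip template.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log; in minikube they come from the cluster config.
	t.Execute(os.Stdout, struct {
		Image, VIP string
		Port       int
	}{Image: "ghcr.io/kube-vip/kube-vip:v1.0.1", VIP: "192.168.39.254", Port: 8443})
}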
	I1119 23:07:07.203646  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:07.217413  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:07.217503  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:07:07.230746  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:07.256658  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:07.282507  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:07:07.305975  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:07.311016  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:07.328648  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.494364  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.517777  142733 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:07:07.518159  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.518271  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:07.518379  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:07.518395  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 133.678µs

	I1119 23:07:07.518407  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:07.518421  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:07.518647  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.520684  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:07.520832  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.521966  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.523804  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524372  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:07.524416  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524599  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:07.723792  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.724326  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.724350  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.726364  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.728774  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729239  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:07.729270  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729424  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:07.746212  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:07.746278  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:07.746586  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m03" to be "Ready" ...
	I1119 23:07:07.858504  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.858530  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.860355  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.862516  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.862974  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:07.863000  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.863200  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:08.011441  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:08.011468  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:08.013393  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03
	W1119 23:07:09.751904  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:12.252353  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:14.254075  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:16.256443  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:18.752485  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	I1119 23:07:19.751738  142733 node_ready.go:49] node "ha-487903-m03" is "Ready"
	I1119 23:07:19.751783  142733 node_ready.go:38] duration metric: took 12.005173883s for node "ha-487903-m03" to be "Ready" ...
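(The node_ready wait above polls the ha-487903-m03 Node object until its Ready condition flips from Unknown to True, which here takes about 12 seconds after the kubelet restarts. A simplified sketch of the same poll with client-go follows; the kubeconfig path and retry interval are illustrative assumptions, not the test's actual wiring, which builds a rest.Config directly from the profile certificates as shown in the kapi.go line above.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady returns true once the named node reports a Ready=True condition.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	cs, _ := kubernetes.NewForConfig(cfg)
	for {
		ok, err := nodeReady(cs, "ha-487903-m03")
		if err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(2 * time.Second) // the log retries on a similar cadence
	}
}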
	I1119 23:07:19.751803  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:07:19.751911  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:07:19.833604  142733 api_server.go:72] duration metric: took 12.315777974s to wait for apiserver process to appear ...
	I1119 23:07:19.833635  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:07:19.833668  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:07:19.841482  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:07:19.842905  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:07:19.842932  142733 api_server.go:131] duration metric: took 9.287176ms to wait for apiserver health ...
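(The healthz probe above issues a GET against https://192.168.39.15:8443/healthz with the cluster client certificate and expects a 200 response with the literal body "ok" before declaring the control plane healthy. A simplified sketch of that probe is below; the certificate paths are the profile paths listed earlier in the log, and this is a standalone illustration rather than minikube's api_server.go code.)

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Client cert/key and CA from the minikube profile, as listed in the log.
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt",
		"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key",
	)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}},
	}
	resp, err := client.Get("https://192.168.39.15:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}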
	I1119 23:07:19.842951  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:07:19.855636  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:07:19.855671  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.855679  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.855689  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.855695  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.855700  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.855705  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.855710  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.855714  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.855724  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.855733  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.855738  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.855743  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.855747  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.855753  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.855760  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.855764  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.855769  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.855774  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.855778  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.855783  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.855793  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.855797  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.855802  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.855806  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.855814  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.855818  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.855827  142733 system_pods.go:74] duration metric: took 12.86809ms to wait for pod list to return data ...
	I1119 23:07:19.855842  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:07:19.860573  142733 default_sa.go:45] found service account: "default"
	I1119 23:07:19.860597  142733 default_sa.go:55] duration metric: took 4.749483ms for default service account to be created ...
	I1119 23:07:19.860606  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:07:19.870790  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:07:19.870825  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.870831  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.870836  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.870840  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.870843  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.870847  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.870851  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.870854  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.870857  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.870861  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.870865  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.870870  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.870895  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.870902  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.870911  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.870916  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.870924  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.870929  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.870936  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.870941  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.870946  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.870953  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.870957  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.870963  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.870966  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.870969  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.870982  142733 system_pods.go:126] duration metric: took 10.369487ms to wait for k8s-apps to be running ...
	I1119 23:07:19.870995  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:19.871070  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:19.923088  142733 system_svc.go:56] duration metric: took 52.080591ms WaitForService to wait for kubelet
	I1119 23:07:19.923137  142733 kubeadm.go:587] duration metric: took 12.405311234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:19.923168  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:19.930259  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930299  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930316  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930323  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930329  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930334  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930343  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930352  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930359  142733 node_conditions.go:105] duration metric: took 7.184829ms to run NodePressure ...
	I1119 23:07:19.930381  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:19.930425  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:19.932180  142733 out.go:203] 
	I1119 23:07:19.934088  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:19.934226  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.935991  142733 out.go:179] * Starting "ha-487903-m04" worker node in "ha-487903" cluster
	I1119 23:07:19.937566  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:07:19.937584  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:07:19.937693  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:07:19.937716  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:07:19.937810  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.938027  142733 start.go:360] acquireMachinesLock for ha-487903-m04: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:07:19.938076  142733 start.go:364] duration metric: took 28.868µs to acquireMachinesLock for "ha-487903-m04"
	I1119 23:07:19.938095  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:07:19.938109  142733 fix.go:54] fixHost starting: m04
	I1119 23:07:19.940296  142733 fix.go:112] recreateIfNeeded on ha-487903-m04: state=Stopped err=<nil>
	W1119 23:07:19.940327  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:07:19.942168  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m04" ...
	I1119 23:07:19.942220  142733 main.go:143] libmachine: starting domain...
	I1119 23:07:19.942265  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:07:19.943145  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:07:19.943566  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:07:19.944170  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:07:19.945811  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m04</name>
	  <uuid>2ce148a1-b982-46f6-ada0-6a5a5b14ddce</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/ha-487903-m04.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:eb:f3:c3'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:03:3a:d4'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
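	The XML above is the existing libvirt domain definition that the kvm2 driver re-uses when it restarts the stopped ha-487903-m04 VM. As a rough illustration of the "starting domain" step only (a minimal sketch assuming the libvirt.org/go/libvirt bindings, not minikube's actual driver code), the flow looks roughly like this:

```go
package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local system hypervisor, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Look up the existing (stopped) domain by name and start it,
	// mirroring the "Restarting existing kvm2 VM for ha-487903-m04" line.
	dom, err := conn.LookupDomainByName("ha-487903-m04")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // equivalent of `virsh start`
		log.Fatal(err)
	}
	fmt.Println("domain started; next steps: wait for a DHCP lease, then SSH")
}
```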
	I1119 23:07:21.541216  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:07:21.542947  142733 main.go:143] libmachine: domain is now running
	I1119 23:07:21.542968  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:07:21.543929  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544529  142733 main.go:143] libmachine: domain ha-487903-m04 has current primary IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544546  142733 main.go:143] libmachine: found domain IP: 192.168.39.187
	I1119 23:07:21.544554  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:07:21.545091  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.545120  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"}
	I1119 23:07:21.545133  142733 main.go:143] libmachine: reserved static IP address 192.168.39.187 for domain ha-487903-m04
	I1119 23:07:21.545137  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:07:21.545142  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:07:21.547650  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548218  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.548249  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548503  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:21.548718  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:21.548730  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:07:24.652184  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:30.732203  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:34.764651  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: connection refused
	I1119 23:07:37.880284  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:07:37.884099  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884565  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.884591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884934  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:37.885280  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:37.887971  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888368  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.888391  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888542  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:37.888720  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:37.888729  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:37.998350  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:37.998394  142733 buildroot.go:166] provisioning hostname "ha-487903-m04"
	I1119 23:07:38.002080  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002563  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.002588  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002794  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.003043  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.003057  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m04 && echo "ha-487903-m04" | sudo tee /etc/hostname
	I1119 23:07:38.135349  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m04
	
	I1119 23:07:38.138757  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139357  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.139392  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139707  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.140010  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.140053  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:38.264087  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:07:38.264126  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:38.264149  142733 buildroot.go:174] setting up certificates
	I1119 23:07:38.264161  142733 provision.go:84] configureAuth start
	I1119 23:07:38.267541  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.268176  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.268215  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.270752  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271136  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.271156  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271421  142733 provision.go:143] copyHostCerts
	I1119 23:07:38.271453  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271483  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:38.271492  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271573  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:38.271646  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271664  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:38.271667  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271693  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:38.271735  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271751  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:38.271757  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271779  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:38.271823  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m04 san=[127.0.0.1 192.168.39.187 ha-487903-m04 localhost minikube]
	I1119 23:07:38.932314  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:38.932380  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:38.935348  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.935810  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.935836  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.936006  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.025808  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:39.025896  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:39.060783  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:39.060907  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:39.093470  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:39.093540  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1119 23:07:39.126116  142733 provision.go:87] duration metric: took 861.930238ms to configureAuth
	I1119 23:07:39.126158  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:39.126455  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:39.129733  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130126  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.130155  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130312  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.130560  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.130587  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:39.433038  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:39.433084  142733 machine.go:97] duration metric: took 1.547777306s to provisionDockerMachine
	I1119 23:07:39.433101  142733 start.go:293] postStartSetup for "ha-487903-m04" (driver="kvm2")
	I1119 23:07:39.433114  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:39.433178  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:39.436063  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436658  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.436689  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436985  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.524100  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:39.529723  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:39.529752  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:39.529847  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:39.529973  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:39.529988  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:39.530101  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:39.544274  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:39.576039  142733 start.go:296] duration metric: took 142.916645ms for postStartSetup
	I1119 23:07:39.576112  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:39.578695  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579305  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.579334  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579504  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.668947  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:39.669041  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:39.733896  142733 fix.go:56] duration metric: took 19.795762355s for fixHost
	I1119 23:07:39.737459  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738018  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.738061  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738362  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.738661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.738687  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:39.869213  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593659.839682658
	
	I1119 23:07:39.869234  142733 fix.go:216] guest clock: 1763593659.839682658
	I1119 23:07:39.869241  142733 fix.go:229] Guest: 2025-11-19 23:07:39.839682658 +0000 UTC Remote: 2025-11-19 23:07:39.733931353 +0000 UTC m=+107.078175487 (delta=105.751305ms)
	I1119 23:07:39.869257  142733 fix.go:200] guest clock delta is within tolerance: 105.751305ms
	I1119 23:07:39.869262  142733 start.go:83] releasing machines lock for "ha-487903-m04", held for 19.931174771s
	I1119 23:07:39.872591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.873064  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.873085  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.875110  142733 out.go:179] * Found network options:
	I1119 23:07:39.876331  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	W1119 23:07:39.877435  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877458  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877478  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877889  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877920  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877932  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:39.877962  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:39.877987  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:39.881502  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.881991  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882088  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882128  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882283  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.882500  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882524  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882696  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:40.118089  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:40.126955  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:40.127054  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:40.150315  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:40.150351  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:40.150436  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:40.176112  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:40.195069  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:40.195148  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:40.217113  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:40.240578  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:40.404108  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:40.642170  142733 docker.go:234] disabling docker service ...
	I1119 23:07:40.642260  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:40.659709  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:40.677698  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:40.845769  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:41.005373  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:41.028115  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:41.057337  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:41.057425  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.072373  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:41.072466  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.086681  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.100921  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.115817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:41.132398  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.149261  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.174410  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.189666  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:41.202599  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:41.202679  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:41.228059  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:07:41.243031  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:41.403712  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:07:41.527678  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:41.527765  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:41.534539  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:41.534620  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:41.539532  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:41.585994  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:07:41.586086  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.621736  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.656086  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:41.657482  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:41.658756  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:41.659970  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	I1119 23:07:41.664105  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:41.664550  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664716  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:41.670624  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:41.688618  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:41.688858  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:41.690292  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:41.690482  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.187
	I1119 23:07:41.690491  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:41.690504  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:41.690631  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:41.690692  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:41.690711  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:41.690731  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:41.690750  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:41.690768  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:41.690840  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:41.690886  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:41.690897  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:41.690917  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:41.690937  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:41.690958  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:41.690994  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:41.691025  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.691038  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:41.691048  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:41.691068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:41.726185  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:41.762445  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:41.804578  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:41.841391  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:41.881178  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:41.917258  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:41.953489  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:41.961333  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:41.977066  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983550  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983610  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.991656  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:42.006051  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:42.021516  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028801  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028900  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.036899  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:42.052553  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:42.067472  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073674  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073751  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.081607  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:07:42.096183  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:42.101534  142733 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 23:07:42.101590  142733 kubeadm.go:935] updating node {m04 192.168.39.187 0 v1.34.1 crio false true} ...
	I1119 23:07:42.101683  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:07:42.101762  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:42.115471  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:42.115548  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1119 23:07:42.129019  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:42.153030  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:42.178425  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:42.183443  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:42.200493  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.356810  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:42.394017  142733 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1119 23:07:42.394368  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.394458  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:42.394553  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:42.394567  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 116.988µs
	I1119 23:07:42.394578  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:42.394596  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:42.394838  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.395796  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:42.397077  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.397151  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.400663  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401297  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:42.401366  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401574  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:42.612769  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:42.613454  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.613478  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.615709  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.618644  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619227  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:42.619265  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619437  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:42.650578  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:42.650662  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:42.651008  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m04" to be "Ready" ...
	I1119 23:07:42.759664  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.759695  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.762502  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.766101  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766612  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:42.766645  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766903  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:42.916732  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.916761  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.919291  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.922664  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923283  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:42.923322  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923548  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:43.068345  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:43.068378  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:43.068389  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03 ha-487903-m04
	I1119 23:07:43.156120  142733 node_ready.go:49] node "ha-487903-m04" is "Ready"
	I1119 23:07:43.156156  142733 node_ready.go:38] duration metric: took 505.123719ms for node "ha-487903-m04" to be "Ready" ...
	I1119 23:07:43.156173  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:43.156241  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:43.175222  142733 system_svc.go:56] duration metric: took 19.040723ms WaitForService to wait for kubelet
	I1119 23:07:43.175261  142733 kubeadm.go:587] duration metric: took 781.202644ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:43.175288  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:43.180835  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180870  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180910  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180916  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180924  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180942  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180953  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180959  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180965  142733 node_conditions.go:105] duration metric: took 5.670636ms to run NodePressure ...
	I1119 23:07:43.180984  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:43.181017  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:43.181360  142733 ssh_runner.go:195] Run: rm -f paused
	I1119 23:07:43.187683  142733 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:07:43.188308  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:07:43.202770  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210054  142733 pod_ready.go:94] pod "coredns-66bc5c9577-5gt2t" is "Ready"
	I1119 23:07:43.210077  142733 pod_ready.go:86] duration metric: took 7.281319ms for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210085  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.216456  142733 pod_ready.go:94] pod "coredns-66bc5c9577-zjxkb" is "Ready"
	I1119 23:07:43.216477  142733 pod_ready.go:86] duration metric: took 6.387459ms for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.220711  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230473  142733 pod_ready.go:94] pod "etcd-ha-487903" is "Ready"
	I1119 23:07:43.230503  142733 pod_ready.go:86] duration metric: took 9.759051ms for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230514  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238350  142733 pod_ready.go:94] pod "etcd-ha-487903-m02" is "Ready"
	I1119 23:07:43.238386  142733 pod_ready.go:86] duration metric: took 7.863104ms for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238400  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.389841  142733 request.go:683] "Waited before sending request" delay="151.318256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-487903-m03"
	I1119 23:07:43.588929  142733 request.go:683] "Waited before sending request" delay="193.203585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:43.592859  142733 pod_ready.go:94] pod "etcd-ha-487903-m03" is "Ready"
	I1119 23:07:43.592895  142733 pod_ready.go:86] duration metric: took 354.487844ms for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.789462  142733 request.go:683] "Waited before sending request" delay="196.405608ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1119 23:07:43.797307  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.989812  142733 request.go:683] "Waited before sending request" delay="192.389949ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903"
	I1119 23:07:44.189117  142733 request.go:683] "Waited before sending request" delay="193.300165ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:44.194456  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903" is "Ready"
	I1119 23:07:44.194483  142733 pod_ready.go:86] duration metric: took 397.15415ms for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.194492  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.388959  142733 request.go:683] "Waited before sending request" delay="194.329528ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m02"
	I1119 23:07:44.589884  142733 request.go:683] "Waited before sending request" delay="195.382546ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:44.596472  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m02" is "Ready"
	I1119 23:07:44.596506  142733 pod_ready.go:86] duration metric: took 402.007843ms for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.596519  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.788946  142733 request.go:683] "Waited before sending request" delay="192.297042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m03"
	I1119 23:07:44.988960  142733 request.go:683] "Waited before sending request" delay="194.310641ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:44.996400  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m03" is "Ready"
	I1119 23:07:44.996441  142733 pod_ready.go:86] duration metric: took 399.911723ms for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.188855  142733 request.go:683] "Waited before sending request" delay="192.290488ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1119 23:07:45.196689  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.389182  142733 request.go:683] "Waited before sending request" delay="192.281881ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903"
	I1119 23:07:45.589591  142733 request.go:683] "Waited before sending request" delay="194.384266ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:45.595629  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903" is "Ready"
	I1119 23:07:45.595661  142733 pod_ready.go:86] duration metric: took 398.942038ms for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.595674  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.789154  142733 request.go:683] "Waited before sending request" delay="193.378185ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m02"
	I1119 23:07:45.989593  142733 request.go:683] "Waited before sending request" delay="195.373906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:45.995418  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m02" is "Ready"
	I1119 23:07:45.995451  142733 pod_ready.go:86] duration metric: took 399.769417ms for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.995462  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.188855  142733 request.go:683] "Waited before sending request" delay="193.309398ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m03"
	I1119 23:07:46.389512  142733 request.go:683] "Waited before sending request" delay="194.260664ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:46.394287  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m03" is "Ready"
	I1119 23:07:46.394312  142733 pod_ready.go:86] duration metric: took 398.844264ms for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.589870  142733 request.go:683] "Waited before sending request" delay="195.416046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1119 23:07:46.597188  142733 pod_ready.go:83] waiting for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.789771  142733 request.go:683] "Waited before sending request" delay="192.426623ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77wjf"
	I1119 23:07:46.989150  142733 request.go:683] "Waited before sending request" delay="193.435229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:46.993720  142733 pod_ready.go:94] pod "kube-proxy-77wjf" is "Ready"
	I1119 23:07:46.993753  142733 pod_ready.go:86] duration metric: took 396.52945ms for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.993765  142733 pod_ready.go:83] waiting for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.189146  142733 request.go:683] "Waited before sending request" delay="195.267437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk7mh"
	I1119 23:07:47.388849  142733 request.go:683] "Waited before sending request" delay="192.29395ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:47.395640  142733 pod_ready.go:94] pod "kube-proxy-fk7mh" is "Ready"
	I1119 23:07:47.395670  142733 pod_ready.go:86] duration metric: took 401.897062ms for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.395683  142733 pod_ready.go:83] waiting for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.589099  142733 request.go:683] "Waited before sending request" delay="193.31568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tkx9r"
	I1119 23:07:47.789418  142733 request.go:683] "Waited before sending request" delay="195.323511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:47.795048  142733 pod_ready.go:94] pod "kube-proxy-tkx9r" is "Ready"
	I1119 23:07:47.795078  142733 pod_ready.go:86] duration metric: took 399.387799ms for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.795088  142733 pod_ready.go:83] waiting for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.989569  142733 request.go:683] "Waited before sending request" delay="194.336733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxtk6"
	I1119 23:07:48.189017  142733 request.go:683] "Waited before sending request" delay="192.313826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m04"
	I1119 23:07:48.194394  142733 pod_ready.go:94] pod "kube-proxy-zxtk6" is "Ready"
	I1119 23:07:48.194435  142733 pod_ready.go:86] duration metric: took 399.338885ms for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.388945  142733 request.go:683] "Waited before sending request" delay="194.328429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1119 23:07:48.555571  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.789654  142733 request.go:683] "Waited before sending request" delay="195.382731ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:48.795196  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903" is "Ready"
	I1119 23:07:48.795234  142733 pod_ready.go:86] duration metric: took 239.629107ms for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.795246  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.989712  142733 request.go:683] "Waited before sending request" delay="194.356732ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m02"
	I1119 23:07:49.189524  142733 request.go:683] "Waited before sending request" delay="194.365482ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:49.195480  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m02" is "Ready"
	I1119 23:07:49.195503  142733 pod_ready.go:86] duration metric: took 400.248702ms for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.195512  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.388917  142733 request.go:683] "Waited before sending request" delay="193.285895ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m03"
	I1119 23:07:49.589644  142733 request.go:683] "Waited before sending request" delay="195.362698ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:49.594210  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m03" is "Ready"
	I1119 23:07:49.594248  142733 pod_ready.go:86] duration metric: took 398.725567ms for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.594266  142733 pod_ready.go:40] duration metric: took 6.406545371s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:07:49.639756  142733 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 23:07:49.641778  142733 out.go:179] * Done! kubectl is now configured to use "ha-487903" cluster and "default" namespace by default
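	The waits logged above (node "Ready", per-component pod "Ready" checks, and the repeated "Waited before sending request ... client-side throttling" messages) follow the usual client-go pattern: a rest.Config whose QPS/Burst are left at zero falls back to client-go's default client-side rate limiter (5 QPS, burst 10), and readiness is polled via label-selected pod lists. The sketch below is a minimal, self-contained illustration of that pattern, not the minikube implementation; the kubeconfig path, label selector, and poll interval are illustrative placeholders.

	// readiness_wait_sketch.go - illustrative only, assuming a reachable cluster
	// and a kubeconfig at the (hypothetical) path below.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; minikube uses its own profile kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Leaving config.QPS and config.Burst at zero means client-go applies its
		// default client-side rate limiter, which is what produces throttling
		// delays like the ones logged above when many GETs are issued in a burst.
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll up to 4 minutes for all kube-scheduler pods in kube-system to be
		// Ready, mirroring the per-component waits in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
					LabelSelector: "component=kube-scheduler",
				})
				if err != nil {
					return false, nil // retry on transient errors
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return len(pods.Items) > 0, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all kube-scheduler pods are Ready")
	}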
	
	
	==> CRI-O <==
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.566257485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593674566230036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96d74783-187c-4c5f-b090-3d65c0d5d9ce name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.566810712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6925abf5-19ec-4f2d-ae30-4a4940b5e6dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.566868830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6925abf5-19ec-4f2d-ae30-4a4940b5e6dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.567303109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6925abf5-19ec-4f2d-ae30-4a4940b5e6dc name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.616849409Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffe99f22-fd16-42f3-a427-74c401a0861a name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.617087278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffe99f22-fd16-42f3-a427-74c401a0861a name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.618334648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09bf51fe-1d04-4ee2-a762-d45c191ebde6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.618919919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593674618893486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09bf51fe-1d04-4ee2-a762-d45c191ebde6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.619825082Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=765d5ff9-118c-4791-801a-00aae989a38b name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.619958879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=765d5ff9-118c-4791-801a-00aae989a38b name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.621141101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=765d5ff9-118c-4791-801a-00aae989a38b name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.667347621Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fdd8d5e-8e04-40ed-9710-328f210ba3e2 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.667443041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fdd8d5e-8e04-40ed-9710-328f210ba3e2 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.669139219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a1c2d99-1eb7-4a46-b395-6ba29e91b86c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.670253395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593674670194728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a1c2d99-1eb7-4a46-b395-6ba29e91b86c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.670935019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33304522-ca7e-455b-96a2-2d2f06b43fb6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.670987949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33304522-ca7e-455b-96a2-2d2f06b43fb6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.671372954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33304522-ca7e-455b-96a2-2d2f06b43fb6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.720709241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b691efdf-51ee-4508-b8a0-712909018192 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.721033363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b691efdf-51ee-4508-b8a0-712909018192 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.722552270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acdfb968-6ce4-48f1-b03c-7ad03e9202fe name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.723109555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593674723084755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acdfb968-6ce4-48f1-b03c-7ad03e9202fe name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.724088302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=603c16b1-1482-4272-ac12-f25f1a5af7b4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.724171069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=603c16b1-1482-4272-ac12-f25f1a5af7b4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:07:54 ha-487903 crio[1051]: time="2025-11-19 23:07:54.724530553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=603c16b1-1482-4272-ac12-f25f1a5af7b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6554703e81880       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      38 seconds ago       Running             storage-provisioner       4                   bcf53581b6e1f       storage-provisioner
	08ecabad51ca1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   1                   270bc5025a208       busybox-7b57f96db7-vl8nf
	f4db302f8e1d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   bcf53581b6e1f       storage-provisioner
	cf3b8bef3853f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   1                   02cf6c2f51b7a       coredns-66bc5c9577-zjxkb
	671e74cfb90ed       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      About a minute ago   Running             kindnet-cni               1                   21cea62c9e5ab       kindnet-p9nqh
	323c3e00977ee       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   1                   acd42dfb49d39       coredns-66bc5c9577-5gt2t
	8e1ce69b078fd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      About a minute ago   Running             kube-proxy                1                   2d9689b8c4fc5       kube-proxy-fk7mh
	407c1906949db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   2                   a4df466e854f6       kube-controller-manager-ha-487903
	0a3ebfa791420       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            2                   6d84027fd8d6f       kube-apiserver-ha-487903
	9f74b446d5d8c       ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178     About a minute ago   Running             kube-vip                  1                   aadb913b7f2aa       kube-vip-ha-487903
	fead33c061a4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            1                   83240b63d40d6       kube-scheduler-ha-487903
	b7d9fc5b2567d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Exited              kube-controller-manager   1                   a4df466e854f6       kube-controller-manager-ha-487903
	361486fad16d1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      1                   2ea97d68a5406       etcd-ha-487903
	37548c727f81a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Exited              kube-apiserver            1                   6d84027fd8d6f       kube-apiserver-ha-487903
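	
	The table above matches the shape of the CRI ListContainers data that appears in the crio debug entries earlier in this log (container ID, image, attempt, state, pod). As a rough illustration only, a minimal Go sketch that queries the same information straight from the CRI socket might look like the following; the socket path, module choices, and output formatting are assumptions for this sketch, not part of the test harness.
	
	// lsctrs.go: list containers over the CRI API, roughly the data the
	// "container status" table above is derived from.
	// Assumptions: CRI-O on its default socket path, and the
	// k8s.io/cri-api + google.golang.org/grpc modules available.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Socket path is an assumption; minikube's CRI-O defaults to /var/run/crio/crio.sock.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// An empty filter corresponds to the "No filters were applied, returning
		// full container list" lines in the crio debug log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s attempt=%d  %-17s pod=%s\n",
				c.Id, c.GetMetadata().GetName(), c.GetMetadata().GetAttempt(),
				c.State.String(), c.Labels["io.kubernetes.pod.name"])
		}
	}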
	
	
	==> coredns [323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51076 - 6967 "HINFO IN 7389388171048239250.1605567939079731882. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415536075s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44386 - 47339 "HINFO IN 5025386377785033151.6368126768169479003. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.417913634s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
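	
	Both CoreDNS pods report "dial tcp 10.96.0.1:443: i/o timeout", i.e. the in-cluster kubernetes Service VIP for the apiserver was unreachable while the control plane restarted. As a hedged sketch only (the VIP, path, and timeout are assumptions for illustration), a reachability probe run from inside the cluster could look like this in Go:
	
	// apiserver_probe.go: probe the in-cluster apiserver VIP that the CoreDNS
	// kubernetes plugin was trying to reach (https://10.96.0.1:443).
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Certificate verification is skipped because this probe only checks
				// TCP/TLS reachability of the VIP, not apiserver identity or auth.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Any HTTP status (even 401/403) proves the VIP is reachable; an i/o timeout
		// like the one logged above means the kube-proxy/apiserver path is down.
		resp, err := client.Get("https://10.96.0.1:443/livez")
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver VIP reachable, status:", resp.Status)
	}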
	
	
	==> describe nodes <==
	Name:               ha-487903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_48_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-487903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1ad91e99cee4f2a89ceda034e4410c0
	  System UUID:                a1ad91e9-9cee-4f2a-89ce-da034e4410c0
	  Boot ID:                    1b20db97-3ea3-483b-aa28-0753781928f2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vl8nf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-5gt2t             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     19m
	  kube-system                 coredns-66bc5c9577-zjxkb             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     19m
	  kube-system                 etcd-ha-487903                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         19m
	  kube-system                 kindnet-p9nqh                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19m
	  kube-system                 kube-apiserver-ha-487903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-487903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-fk7mh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-487903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-487903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 69s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)    kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 20m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)    kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)    kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 19m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     19m                  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    19m                  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  19m                  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           19m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-487903 status is now: NodeReady
	  Normal   RegisteredNode           18m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   Starting                 105s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 74s                  kubelet          Node ha-487903 has been rebooted, boot id: 1b20db97-3ea3-483b-aa28-0753781928f2
	  Normal   RegisteredNode           68s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           66s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           30s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	
	
	Name:               ha-487903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_49_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:49:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    ha-487903-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcc51fc7a2ff40ae988dda36299d6bbc
	  System UUID:                dcc51fc7-a2ff-40ae-988d-da36299d6bbc
	  Boot ID:                    6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xjvfn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-487903-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         18m
	  kube-system                 kindnet-9zx8x                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      18m
	  kube-system                 kube-apiserver-ha-487903-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-487903-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-77wjf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-487903-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-487903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 61s                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           18m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   NodeNotReady             14m                node-controller  Node ha-487903-m02 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Warning  Rebooted                 13m                kubelet          Node ha-487903-m02 has been rebooted, boot id: e9c055dc-1db9-46bb-aebb-1872d4771aa9
	  Normal   RegisteredNode           13m                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 69s                kubelet          Node ha-487903-m02 has been rebooted, boot id: 6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Normal   RegisteredNode           68s                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           66s                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           30s                node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	
	
	Name:               ha-487903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_50_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:50:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-487903-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9ddbb3bf8b54cd48c27cb1452f23fd2
	  System UUID:                e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2
	  Boot ID:                    ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6q5gq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-487903-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         16m
	  kube-system                 kindnet-kslhw                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      16m
	  kube-system                 kube-apiserver-ha-487903-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-487903-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tkx9r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-487903-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-487903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 26s                kube-proxy       
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   NodeNotReady             12m                node-controller  Node ha-487903-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           68s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           66s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   Starting                 48s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node ha-487903-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x7 over 48s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 36s                kubelet          Node ha-487903-m03 has been rebooted, boot id: ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Normal   RegisteredNode           30s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	
	
	Name:               ha-487903-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_51_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:51:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:07:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:42 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-487903-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ce148a1b98246f6ada06a5a5b14ddce
	  System UUID:                2ce148a1-b982-46f6-ada0-6a5a5b14ddce
	  Boot ID:                    7878c528-f6af-4234-946e-b1c55c0ff956
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-s9k2l       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      15m
	  kube-system                 kube-proxy-zxtk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 8s                 kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     15m (x3 over 15m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x3 over 15m)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m (x3 over 15m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           15m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeReady                15m                kubelet          Node ha-487903-m04 status is now: NodeReady
	  Normal   RegisteredNode           13m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeNotReady             12m                node-controller  Node ha-487903-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           68s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           66s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           30s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 13s                kubelet          Node ha-487903-m04 has been rebooted, boot id: 7878c528-f6af-4234-946e-b1c55c0ff956
	  Normal   NodeHasSufficientMemory  13s (x4 over 13s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x4 over 13s)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x4 over 13s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             13s                kubelet          Node ha-487903-m04 status is now: NodeNotReady
	  Normal   NodeReady                13s (x2 over 13s)  kubelet          Node ha-487903-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 23:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 23:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000639] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.971469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.112003] kauditd_printk_skb: 93 callbacks suppressed
	[ +23.563071] kauditd_printk_skb: 193 callbacks suppressed
	[  +9.425091] kauditd_printk_skb: 6 callbacks suppressed
	[  +3.746118] kauditd_printk_skb: 281 callbacks suppressed
	[Nov19 23:07] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e] <==
	{"level":"warn","ts":"2025-11-19T23:07:06.203115Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.39.160:2380/version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:06.203177Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:07.487952Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:07.489213Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:10.205469Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.39.160:2380/version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:10.205597Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:12.489039Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:12.490337Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"12a2eca89caa6ef","rtt":"0s","error":"dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:14.207287Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.39.160:2380/version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-19T23:07:14.207351Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"12a2eca89caa6ef","error":"Get \"https://192.168.39.160:2380/version\": dial tcp 192.168.39.160:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-19T23:07:16.117696Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"12a2eca89caa6ef","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-19T23:07:16.117815Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.118135Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.119339Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"12a2eca89caa6ef","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-19T23:07:16.120059Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.137587Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"info","ts":"2025-11-19T23:07:16.139573Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"12a2eca89caa6ef"}
	{"level":"warn","ts":"2025-11-19T23:07:17.440682Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"191.54871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T23:07:17.440904Z","caller":"traceutil/trace.go:172","msg":"trace[1753473238] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2450; }","duration":"191.78252ms","start":"2025-11-19T23:07:17.249092Z","end":"2025-11-19T23:07:17.440875Z","steps":["trace[1753473238] 'agreement among raft nodes before linearized reading'  (duration: 71.313203ms)","trace[1753473238] 'range keys from in-memory index tree'  (duration: 120.155077ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T23:07:17.441401Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.503606ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8674384362276629839 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.191\" mod_revision:2420 > success:<request_put:<key:\"/registry/masterleases/192.168.39.191\" value_size:67 lease:8674384362276629837 >> failure:<>>","response":"size:16"}
	{"level":"warn","ts":"2025-11-19T23:07:48.542069Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"48ac9f57fd1b7861","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"10.845349ms"}
	{"level":"warn","ts":"2025-11-19T23:07:48.542343Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"12a2eca89caa6ef","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"11.124624ms"}
	{"level":"info","ts":"2025-11-19T23:07:48.543856Z","caller":"traceutil/trace.go:172","msg":"trace[1319701574] linearizableReadLoop","detail":"{readStateIndex:3180; appliedIndex:3180; }","duration":"158.564762ms","start":"2025-11-19T23:07:48.385263Z","end":"2025-11-19T23:07:48.543828Z","steps":["trace[1319701574] 'read index received'  (duration: 158.558852ms)","trace[1319701574] 'applied index is now lower than readState.Index'  (duration: 4.531ยตs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T23:07:48.545623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.332943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T23:07:48.545695Z","caller":"traceutil/trace.go:172","msg":"trace[1464022155] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2660; }","duration":"160.440859ms","start":"2025-11-19T23:07:48.385237Z","end":"2025-11-19T23:07:48.545678Z","steps":["trace[1464022155] 'agreement among raft nodes before linearized reading'  (duration: 158.944389ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:07:55 up 1 min,  0 users,  load average: 0.70, 0.30, 0.11
	Linux ha-487903 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95] <==
	I1119 23:07:25.551339       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.160 Flags: [] Table: 0 Realm: 0} 
	I1119 23:07:25.551569       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:07:25.551605       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:07:25.551717       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.187 Flags: [] Table: 0 Realm: 0} 
	I1119 23:07:25.551968       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:07:25.551979       1 main.go:301] handling current node
	I1119 23:07:25.555969       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:07:25.555998       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:07:25.556125       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.191 Flags: [] Table: 0 Realm: 0} 
	I1119 23:07:35.550785       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:07:35.551328       1 main.go:301] handling current node
	I1119 23:07:35.551482       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:07:35.551569       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:07:35.552075       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:07:35.552137       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:07:35.553487       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:07:35.553607       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:07:45.613812       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:07:45.613849       1 main.go:301] handling current node
	I1119 23:07:45.613870       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:07:45.613875       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:07:45.614056       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:07:45.614062       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:07:45.614196       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:07:45.614205       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af] <==
	I1119 23:06:41.578069       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:06:41.578813       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:41.578895       1 policy_source.go:240] refreshing policies
	I1119 23:06:41.608674       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:06:41.652233       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 23:06:41.655394       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 23:06:41.655821       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 23:06:41.655850       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 23:06:41.656332       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 23:06:41.656371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 23:06:41.656393       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 23:06:41.661422       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:06:41.661498       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:06:41.661574       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:06:41.678861       1 cache.go:39] Caches are synced for autoregister controller
	W1119 23:06:41.766283       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I1119 23:06:41.770787       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:06:41.846071       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1119 23:06:41.851314       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1119 23:06:42.378977       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:06:42.473024       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1119 23:06:45.193304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.191]
	I1119 23:06:47.599355       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:06:47.956548       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:06:50.470006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4] <==
	I1119 23:06:11.880236       1 server.go:150] Version: v1.34.1
	I1119 23:06:11.880286       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1119 23:06:12.813039       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1119 23:06:12.813073       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1119 23:06:12.813086       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1119 23:06:12.813090       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1119 23:06:12.813094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1119 23:06:12.813097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1119 23:06:12.813101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1119 23:06:12.813104       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1119 23:06:12.813108       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1119 23:06:12.813111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1119 23:06:12.813114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1119 23:06:12.813118       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1119 23:06:12.905211       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:12.913843       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1119 23:06:12.920093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1119 23:06:12.966564       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:12.985714       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1119 23:06:12.985841       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1119 23:06:12.986449       1 instance.go:239] Using reconciler: lease
	W1119 23:06:12.991441       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:32.899983       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1119 23:06:32.912361       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1119 23:06:32.990473       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6] <==
	I1119 23:06:47.634000       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 23:06:47.640819       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 23:06:47.645361       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 23:06:47.647492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 23:06:47.648823       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 23:06:47.648946       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 23:06:47.649012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 23:06:47.649925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:06:47.650061       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:06:47.652900       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 23:06:47.653973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:06:47.654043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:06:47.654066       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:06:47.655286       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 23:06:47.658251       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 23:06:47.661337       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:06:47.661495       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:06:47.665057       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 23:06:47.668198       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 23:06:47.718631       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m04"
	I1119 23:06:47.722547       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903"
	I1119 23:06:47.722625       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m02"
	I1119 23:06:47.722698       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m03"
	I1119 23:06:47.725022       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 23:07:42.933678       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	
	
	==> kube-controller-manager [b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b] <==
	I1119 23:06:13.347369       1 serving.go:386] Generated self-signed cert in-memory
	I1119 23:06:14.236064       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1119 23:06:14.236118       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:14.241243       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 23:06:14.241453       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1119 23:06:14.242515       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1119 23:06:14.242958       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 23:06:41.727088       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b] <==
	I1119 23:06:45.377032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:06:45.478419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:06:45.478668       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.15"]
	E1119 23:06:45.478924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:06:45.554663       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1119 23:06:45.554766       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1119 23:06:45.554814       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:06:45.584249       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:06:45.586108       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:06:45.586390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:45.595385       1 config.go:200] "Starting service config controller"
	I1119 23:06:45.595503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:06:45.595536       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:06:45.595628       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:06:45.595660       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:06:45.595795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:06:45.601653       1 config.go:309] "Starting node config controller"
	I1119 23:06:45.601683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:06:45.601692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:06:45.697008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:06:45.701060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:06:45.701074       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fead33c061a4deb0b1eb4ee9dd3e9e724dade2871a97a7aad79bef05acbd4a07] <==
	I1119 23:06:14.220668       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:06:24.867573       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.15:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1119 23:06:24.867603       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:06:24.867609       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:06:41.527454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:06:41.527518       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:41.550229       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.550314       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.551802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:06:41.551954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:06:41.651239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.450466    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8743ca8a-c5e5-4da6-a983-a6191d2a852a-xtables-lock\") pod \"kube-proxy-fk7mh\" (UID: \"8743ca8a-c5e5-4da6-a983-a6191d2a852a\") " pod="kube-system/kube-proxy-fk7mh"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.451469    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1dd7683b-c7e7-487c-904a-506a24f833d8-cni-cfg\") pod \"kindnet-p9nqh\" (UID: \"1dd7683b-c7e7-487c-904a-506a24f833d8\") " pod="kube-system/kindnet-p9nqh"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.451615    1174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8743ca8a-c5e5-4da6-a983-a6191d2a852a-lib-modules\") pod \"kube-proxy-fk7mh\" (UID: \"8743ca8a-c5e5-4da6-a983-a6191d2a852a\") " pod="kube-system/kube-proxy-fk7mh"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.624851    1174 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-487903" podStartSLOduration=0.624815206 podStartE2EDuration="624.815206ms" podCreationTimestamp="2025-11-19 23:06:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 23:06:42.589949403 +0000 UTC m=+32.475007265" watchObservedRunningTime="2025-11-19 23:06:42.624815206 +0000 UTC m=+32.509873072"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.730458    1174 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-487903" podUID="1763d9e3-0be9-49f4-8f8a-a7a938a03e79"
	Nov 19 23:06:42 ha-487903 kubelet[1174]: I1119 23:06:42.737037    1174 scope.go:117] "RemoveContainer" containerID="b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b"
	Nov 19 23:06:50 ha-487903 kubelet[1174]: E1119 23:06:50.419529    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593610418393085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:06:50 ha-487903 kubelet[1174]: E1119 23:06:50.419553    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593610418393085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:00 ha-487903 kubelet[1174]: E1119 23:07:00.425327    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593620423807836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:00 ha-487903 kubelet[1174]: E1119 23:07:00.425367    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593620423807836  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.401322    1174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb\": container with ID starting with dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb not found: ID does not exist" containerID="dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: I1119 23:07:10.401423    1174 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb" err="rpc error: code = NotFound desc = could not find container \"dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb\": container with ID starting with dc3ec72a905f9e317c80a67def712b35b88b16b6a1791a1918f70fbaa4461fdb not found: ID does not exist"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.403444    1174 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e\": container with ID starting with e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e not found: ID does not exist" containerID="e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: I1119 23:07:10.403487    1174 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e" err="rpc error: code = NotFound desc = could not find container \"e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e\": container with ID starting with e0d0a7a11d725bfd7eb80cce55e48041fda68d8d4cb626fb175c6f11cb7b751e not found: ID does not exist"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.430603    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593630428472038  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:10 ha-487903 kubelet[1174]: E1119 23:07:10.430660    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593630428472038  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:16 ha-487903 kubelet[1174]: I1119 23:07:16.059285    1174 scope.go:117] "RemoveContainer" containerID="f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17"
	Nov 19 23:07:20 ha-487903 kubelet[1174]: E1119 23:07:20.435116    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593640433962521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:20 ha-487903 kubelet[1174]: E1119 23:07:20.435143    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593640433962521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443024    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443098    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446103    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446443    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450547    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450679    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-487903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.27s)
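
Aside on the post-mortem log above: the kubelet output is dominated by repeated eviction-manager "missing image stats" errors for /var/lib/containers/storage/overlay-images, i.e. the kubelet cannot determine whether the CRI-O image store lives on a dedicated filesystem. A minimal manual check of what the runtime itself reports (an illustrative sketch, assuming shell access to the node and that crictl is installed there) would be:

    # Ask the CRI runtime for its image filesystem usage: the same stats the
    # kubelet's eviction manager fails to interpret in the log above
    sudo crictl imagefsinfo
    # Cross-check against what the kernel reports for the overlay image store
    sudo df -h /var/lib/containers/storage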

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (81.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 node add --control-plane --alsologtostderr -v 5: (1m17.833683818s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
ha_test.go:618: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-487903-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:621: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-487903-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:624: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-487903-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:627: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5": ha-487903
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-487903-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-487903-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
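
The four assertions above (ha_test.go:618, 621, 624 and 627) all quote the same status output: after the control-plane node was added, the profile reports five nodes, four control planes (ha-487903, m02, m03, m05) plus the m04 worker, while the assertion messages are phrased in terms of three control planes and four hosts/kubelets. As an illustrative manual check (not part of the test itself), the counts being asserted can be reproduced by grepping the same status command shown above:

    # Count control-plane entries and running kubelets in the status output
    out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5 | grep -c "type: Control Plane"
    out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5 | grep -c "kubelet: Running"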

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 logs -n 25: (1.862266925s)
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ node    │ ha-487903 node stop m02 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:53 UTC │
	│ node    │ ha-487903 node start m02 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:53 UTC │ 19 Nov 25 22:54 UTC │
	│ node    │ ha-487903 node list --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ node    │ ha-487903 node list --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ node    │ ha-487903 node delete m03 --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:05 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:07 UTC │
	│ node    │ ha-487903 node add --control-plane --alsologtostderr -v 5 │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:07 UTC │ 19 Nov 25 23:09 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:05:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:05:52.706176  142733 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:05:52.706327  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706339  142733 out.go:374] Setting ErrFile to fd 2...
	I1119 23:05:52.706345  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706585  142733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:05:52.707065  142733 out.go:368] Setting JSON to false
	I1119 23:05:52.708054  142733 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17300,"bootTime":1763576253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 23:05:52.708149  142733 start.go:143] virtualization: kvm guest
	I1119 23:05:52.710481  142733 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 23:05:52.712209  142733 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:05:52.712212  142733 notify.go:221] Checking for updates...
	I1119 23:05:52.713784  142733 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:05:52.715651  142733 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:05:52.717169  142733 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 23:05:52.718570  142733 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 23:05:52.719907  142733 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:05:52.721783  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:05:52.722291  142733 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:05:52.757619  142733 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 23:05:52.759046  142733 start.go:309] selected driver: kvm2
	I1119 23:05:52.759059  142733 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:fal
se default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.759205  142733 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:05:52.760143  142733 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:05:52.760174  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:05:52.760222  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:05:52.760262  142733 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.760375  142733 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:05:52.762211  142733 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 23:05:52.763538  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:05:52.763567  142733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 23:05:52.763575  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:05:52.763673  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:05:52.763683  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:05:52.763787  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:05:52.763996  142733 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:05:52.764045  142733 start.go:364] duration metric: took 30.713µs to acquireMachinesLock for "ha-487903"
	I1119 23:05:52.764058  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:05:52.764066  142733 fix.go:54] fixHost starting: 
	I1119 23:05:52.765697  142733 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 23:05:52.765728  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:05:52.767327  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 23:05:52.767364  142733 main.go:143] libmachine: starting domain...
	I1119 23:05:52.767374  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:05:52.768372  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:05:52.768788  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:05:52.769282  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:05:52.770421  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 23:05:54.042651  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:05:54.044244  142733 main.go:143] libmachine: domain is now running
	I1119 23:05:54.044267  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:05:54.045198  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045704  142733 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045724  142733 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 23:05:54.045732  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:05:54.046222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.046258  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 23:05:54.046271  142733 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 23:05:54.046295  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:05:54.046303  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:05:54.048860  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049341  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.049374  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049568  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:05:54.049870  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:05:54.049901  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:05:57.100181  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:03.180312  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:06.296535  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:06.299953  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300441  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.300473  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300784  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:06.301022  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:06.303559  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.303988  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.304019  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.304170  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.304355  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.304365  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:06.427246  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:06.427299  142733 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 23:06:06.430382  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.430835  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.430864  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.431166  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.431461  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.431480  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 23:06:06.561698  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 23:06:06.564714  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565207  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.565235  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565469  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.565702  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.565719  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:06.681480  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:06.681508  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:06.681543  142733 buildroot.go:174] setting up certificates
	I1119 23:06:06.681552  142733 provision.go:84] configureAuth start
	I1119 23:06:06.685338  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.685816  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.685842  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.688699  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.689164  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689319  142733 provision.go:143] copyHostCerts
	I1119 23:06:06.689357  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689414  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:06.689445  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689527  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:06.689624  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689643  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:06.689649  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689677  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:06.689736  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689753  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:06.689759  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689781  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:06.689843  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
	I1119 23:06:07.018507  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:07.018578  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:07.021615  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022141  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.022166  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022358  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.124817  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:07.124927  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:07.158158  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:07.158263  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1119 23:06:07.190088  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:07.190169  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:07.222689  142733 provision.go:87] duration metric: took 541.123395ms to configureAuth
	I1119 23:06:07.222718  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:07.222970  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:07.226056  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226580  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.226611  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226826  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.227127  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.227155  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:07.467444  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:07.467474  142733 machine.go:97] duration metric: took 1.166437022s to provisionDockerMachine
	I1119 23:06:07.467487  142733 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 23:06:07.467497  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:07.467573  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:07.470835  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471406  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.471439  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471649  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.557470  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:07.562862  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:07.562927  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:07.563034  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:07.563138  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:07.563154  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:07.563287  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:07.576076  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:07.609515  142733 start.go:296] duration metric: took 142.008328ms for postStartSetup
	I1119 23:06:07.609630  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:07.612430  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.612824  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.612846  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.613026  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.696390  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:07.696457  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:07.760325  142733 fix.go:56] duration metric: took 14.99624586s for fixHost
	I1119 23:06:07.763696  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764319  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.764358  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764614  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.764948  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.764966  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:07.879861  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593567.838594342
	
	I1119 23:06:07.879914  142733 fix.go:216] guest clock: 1763593567.838594342
	I1119 23:06:07.879939  142733 fix.go:229] Guest: 2025-11-19 23:06:07.838594342 +0000 UTC Remote: 2025-11-19 23:06:07.760362222 +0000 UTC m=+15.104606371 (delta=78.23212ms)
	I1119 23:06:07.879965  142733 fix.go:200] guest clock delta is within tolerance: 78.23212ms
	I1119 23:06:07.879974  142733 start.go:83] releasing machines lock for "ha-487903", held for 15.115918319s
	I1119 23:06:07.882904  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883336  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.883370  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883966  142733 ssh_runner.go:195] Run: cat /version.json
	I1119 23:06:07.884051  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:07.887096  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887222  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887583  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887617  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887792  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887817  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887816  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.888042  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:08.000713  142733 ssh_runner.go:195] Run: systemctl --version
	I1119 23:06:08.008530  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:08.160324  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:08.168067  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:08.168152  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:08.191266  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:08.191300  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:08.191379  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:08.213137  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:08.230996  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:08.231095  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:08.249013  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:08.265981  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:08.414758  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:08.622121  142733 docker.go:234] disabling docker service ...
	I1119 23:06:08.622209  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:08.639636  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:08.655102  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:08.816483  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:08.968104  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:08.984576  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:09.008691  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:09.008781  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.022146  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:09.022232  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.035596  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.049670  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.063126  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:09.077541  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.091115  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.112968  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.126168  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:09.137702  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:09.137765  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:09.176751  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
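The runner first probes net.bridge.bridge-nf-call-iptables, treats the missing /proc entry as non-fatal, loads br_netfilter, and then enables IPv4 forwarding. A sketch of that probe-then-fallback pattern with os/exec (command names as in the log, error handling simplified):

package main

import (
    "log"
    "os/exec"
)

func main() {
    // Probe the sysctl first; failure usually just means br_netfilter
    // is not loaded yet, so it is logged rather than treated as fatal.
    if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
        log.Printf("sysctl probe failed (often okay): %v", err)
        // Fall back to loading the module explicitly.
        if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
            log.Fatalf("modprobe br_netfilter: %v", err)
        }
    }
    // Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
        log.Fatalf("enable ip_forward: %v", err)
    }
}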
	I1119 23:06:09.191238  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:09.335526  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:09.473011  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:09.473116  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:06:09.479113  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:09.479189  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:09.483647  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:09.528056  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:06:09.528131  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.559995  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.592672  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:09.597124  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597564  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:09.597590  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597778  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:09.602913  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:09.620048  142733 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:06:09.620196  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:09.620243  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.674254  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.674279  142733 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:06:09.674328  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.712016  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.712041  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:09.712058  142733 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 23:06:09.712184  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:06:09.712274  142733 ssh_runner.go:195] Run: crio config
	I1119 23:06:09.768708  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:06:09.768732  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:06:09.768752  142733 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:06:09.768773  142733 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:06:09.768939  142733 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
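The kubeadm YAML above is rendered from the kubeadm options struct printed a few lines earlier. A minimal text/template sketch of that kind of rendering, with a hypothetical, heavily trimmed parameter struct and template (not minikube's actual template):

package main

import (
    "os"
    "text/template"
)

type kubeadmParams struct {
    AdvertiseAddress  string
    BindPort          int
    ClusterName       string
    PodSubnet         string
    KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
    p := kubeadmParams{
        AdvertiseAddress:  "192.168.39.15",
        BindPort:          8443,
        ClusterName:       "mk",
        PodSubnet:         "10.244.0.0/16",
        KubernetesVersion: "v1.34.1",
    }
    t := template.Must(template.New("kubeadm").Parse(tmpl))
    if err := t.Execute(os.Stdout, p); err != nil {
        panic(err)
    }
}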
	
	I1119 23:06:09.768965  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:09.769018  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:09.795571  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:09.795712  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
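The kube-vip static pod above advertises the control-plane VIP 192.168.39.254 and load-balances API traffic on port 8443 across the control-plane nodes. A small sketch of a reachability probe against that VIP, assuming it is routable from where the check runs; net.DialTimeout only verifies the TCP handshake, not TLS or API health:

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // The HA VIP and API server port from the kube-vip manifest above.
    addr := "192.168.39.254:8443"
    conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    if err != nil {
        fmt.Printf("VIP %s not reachable: %v\n", addr, err)
        return
    }
    defer conn.Close()
    fmt.Printf("VIP %s accepts TCP connections\n", addr)
}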
	I1119 23:06:09.795795  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:09.812915  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:09.812990  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 23:06:09.827102  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 23:06:09.850609  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:09.873695  142733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 23:06:09.898415  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:09.921905  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:09.927238  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:09.944650  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:10.092858  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:10.131346  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 23:06:10.131374  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:10.131396  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.131585  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:10.131628  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:10.131638  142733 certs.go:257] generating profile certs ...
	I1119 23:06:10.131709  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:10.131766  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 23:06:10.131799  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:10.131811  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:10.131823  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:10.131835  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:10.131844  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:10.131857  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:10.131867  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:10.131905  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:10.131923  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:10.131976  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:10.132017  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:10.132030  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:10.132063  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:10.132120  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:10.132148  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:10.132194  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:10.132221  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.132233  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.132244  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.132912  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:10.173830  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:10.215892  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:10.259103  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:10.294759  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:10.334934  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:10.388220  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:10.446365  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:10.481746  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:10.514956  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:10.547594  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:10.595613  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:06:10.619484  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:10.626921  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:10.641703  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647634  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647703  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.655724  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:10.670575  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:10.684630  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690618  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690694  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.698531  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:06:10.713731  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:10.729275  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735204  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735297  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.744718  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:06:10.760092  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:10.765798  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:10.773791  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:10.781675  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:10.789835  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:10.797921  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:10.806330  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
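Each `openssl x509 -checkend 86400` call above verifies that the certificate will still be valid 24 hours from now. The equivalent check in Go's crypto/x509, assuming the certificate is readable as a local PEM file (hypothetical path; the real files sit on the VM under /var/lib/minikube/certs):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    data, err := os.ReadFile("apiserver-kubelet-client.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // Equivalent of `-checkend 86400`: does the cert outlive the next 24h?
    if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
        fmt.Println("certificate expires within 24h; regeneration needed")
    } else {
        fmt.Println("certificate valid for at least another 24h")
    }
}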
	I1119 23:06:10.814663  142733 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clust
erName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:06:10.814784  142733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:06:10.814836  142733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:06:10.862721  142733 cri.go:89] found id: ""
	I1119 23:06:10.862820  142733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:06:10.906379  142733 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:06:10.906398  142733 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:06:10.906444  142733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:06:10.937932  142733 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:06:10.938371  142733 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.938511  142733 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 23:06:10.938761  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.939284  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:06:10.939703  142733 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 23:06:10.939720  142733 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 23:06:10.939727  142733 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 23:06:10.939732  142733 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 23:06:10.939737  142733 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 23:06:10.939800  142733 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 23:06:10.940217  142733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:06:10.970469  142733 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 23:06:10.970501  142733 kubeadm.go:602] duration metric: took 64.095819ms to restartPrimaryControlPlane
	I1119 23:06:10.970515  142733 kubeadm.go:403] duration metric: took 155.861263ms to StartCluster
	I1119 23:06:10.970538  142733 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.970645  142733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.971536  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.971861  142733 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:10.971912  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:10.971934  142733 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:06:10.972157  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.972266  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:10.972332  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:10.972347  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 95.206µs
	I1119 23:06:10.972358  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:10.972373  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:10.972588  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.974762  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:10.975000  142733 out.go:179] * Enabled addons: 
	I1119 23:06:10.976397  142733 addons.go:515] duration metric: took 4.466316ms for enable addons: enabled=[]
	I1119 23:06:10.977405  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.977866  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:10.977902  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.978075  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:11.174757  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:11.174779  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:11.179357  142733 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 23:06:11.179394  142733 start.go:247] waiting for cluster config update ...
	I1119 23:06:11.179405  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:11.181383  142733 out.go:203] 
	I1119 23:06:11.182846  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:11.182976  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.184565  142733 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 23:06:11.185697  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:11.185715  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:11.185830  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:11.185845  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:11.185991  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.186234  142733 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:11.186285  142733 start.go:364] duration metric: took 28.134µs to acquireMachinesLock for "ha-487903-m02"
	I1119 23:06:11.186301  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:11.186314  142733 fix.go:54] fixHost starting: m02
	I1119 23:06:11.187948  142733 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 23:06:11.187969  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:11.189608  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 23:06:11.189647  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:11.189655  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:11.190534  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:11.190964  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:11.191485  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:11.192659  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
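The domain XML above is handed back to libvirt to boot the existing m02 machine. As an illustration only (minikube drives libvirt through its API rather than the CLI), a minimal sketch of starting an already-defined domain with virsh via os/exec, using the qemu:///system URI from the cluster config:

package main

import (
    "log"
    "os/exec"
)

func main() {
    // Start the already-defined domain, analogous to the
    // "starting domain" step in the log above.
    cmd := exec.Command("virsh", "-c", "qemu:///system", "start", "ha-487903-m02")
    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatalf("virsh start failed: %v\n%s", err, out)
    }
    log.Printf("virsh: %s", out)
}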
	
	I1119 23:06:12.559560  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:12.561198  142733 main.go:143] libmachine: domain is now running
	I1119 23:06:12.561220  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:12.562111  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562699  142733 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562715  142733 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 23:06:12.562721  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:12.563203  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.563229  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 23:06:12.563240  142733 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 23:06:12.563244  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:12.563250  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:12.566254  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.566903  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.566943  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.567198  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:12.567490  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:12.567510  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:15.660251  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:21.740210  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:24.742545  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 23:06:27.848690  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
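The dial errors above ("no route to host", then "connection refused") are the normal progression while the VM boots and sshd comes up; the runner simply retries until the connection succeeds. A sketch of such a wait loop using net.DialTimeout with an overall deadline (retry interval and timeouts are illustrative, not minikube's actual values):

package main

import (
    "fmt"
    "net"
    "time"
)

// waitForTCP retries a TCP dial until it succeeds or the deadline passes.
func waitForTCP(addr string, deadline time.Duration) error {
    stop := time.Now().Add(deadline)
    for {
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err == nil {
            conn.Close()
            return nil
        }
        if time.Now().After(stop) {
            return fmt.Errorf("timed out waiting for %s: %w", addr, err)
        }
        time.Sleep(3 * time.Second)
    }
}

func main() {
    if err := waitForTCP("192.168.39.191:22", 2*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("SSH port is accepting connections")
}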
	I1119 23:06:27.852119  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852581  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.852609  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852840  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:27.853068  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:27.855169  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855519  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.855541  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855673  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.855857  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.855866  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:27.961777  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:27.961813  142733 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 23:06:27.964686  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965144  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.965168  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965332  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.965514  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.965525  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 23:06:28.090321  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 23:06:28.093353  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093734  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.093771  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093968  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.094236  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.094259  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:28.210348  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:28.210378  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:28.210394  142733 buildroot.go:174] setting up certificates
	I1119 23:06:28.210406  142733 provision.go:84] configureAuth start
	I1119 23:06:28.213280  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.213787  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.213819  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216188  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216513  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.216537  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216650  142733 provision.go:143] copyHostCerts
	I1119 23:06:28.216681  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216719  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:28.216731  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216806  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:28.216924  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.216954  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:28.216962  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.217011  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:28.217078  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217105  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:28.217114  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217151  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:28.217219  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
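The server cert for m02 is generated with the SAN list shown above (127.0.0.1, 192.168.39.191, ha-487903-m02, localhost, minikube) and signed by the machine CA. A simplified, self-signed crypto/x509 sketch that only shows how the SANs are attached; the real provisioning signs with ca.pem/ca-key.pem and writes server.pem/server-key.pem:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-487903-m02"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // The SAN list from the provisioning log above.
        DNSNames:    []string{"ha-487903-m02", "localhost", "minikube"},
        IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.191")},
    }
    // Self-signed for brevity; the real flow signs with the CA key.
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}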
	I1119 23:06:28.306411  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:28.306488  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:28.309423  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309811  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.309838  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309994  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.397995  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:28.398093  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:28.433333  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:28.433422  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:06:28.465202  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:28.465281  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:28.497619  142733 provision.go:87] duration metric: took 287.196846ms to configureAuth
	I1119 23:06:28.497657  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:28.497961  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:28.500692  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501143  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.501166  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501348  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.501530  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.501542  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:28.756160  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:28.756188  142733 machine.go:97] duration metric: took 903.106737ms to provisionDockerMachine
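The step above writes a CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube over SSH and restarts CRI-O. A minimal sketch of the same pattern with golang.org/x/crypto/ssh is shown below; it is not the runner's actual code, and the key path is simplified to $HOME/.minikube (an assumption), while the address, user and command text are taken from the log.

    // Sketch: run the CRI-O sysconfig provisioning command over SSH,
    // roughly mirroring what the SSH runner does in this step.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// assumed key location; the log uses the Jenkins workspace path
    	keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-487903-m02/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.191:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
    	out, err := session.CombinedOutput(cmd)
    	fmt.Println(string(out))
    	if err != nil {
    		log.Fatal(err)
    	}
    }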
	I1119 23:06:28.756199  142733 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 23:06:28.756221  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:28.756309  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:28.759030  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759384  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.759410  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759547  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.845331  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:28.850863  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:28.850908  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:28.850968  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:28.851044  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:28.851055  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:28.851135  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:28.863679  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:28.895369  142733 start.go:296] duration metric: took 139.152116ms for postStartSetup
	I1119 23:06:28.895468  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:28.898332  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898765  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.898790  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898999  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.985599  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:28.985693  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:29.047204  142733 fix.go:56] duration metric: took 17.860883759s for fixHost
	I1119 23:06:29.050226  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050744  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.050767  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050981  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:29.051235  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:29.051247  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:29.170064  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593589.134247097
	
	I1119 23:06:29.170097  142733 fix.go:216] guest clock: 1763593589.134247097
	I1119 23:06:29.170109  142733 fix.go:229] Guest: 2025-11-19 23:06:29.134247097 +0000 UTC Remote: 2025-11-19 23:06:29.047235815 +0000 UTC m=+36.391479959 (delta=87.011282ms)
	I1119 23:06:29.170136  142733 fix.go:200] guest clock delta is within tolerance: 87.011282ms
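The clock check above reads the guest time with `date +%s.%N` over SSH and accepts the machine when the delta against the host's reference time is small. A small parsing sketch, using the value from the log; the 2s tolerance is an assumption for illustration.

    // Sketch: parse `date +%s.%N` output and compare it against a tolerance,
    // similar in spirit to the guest-clock check logged above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// pad the fractional part out to nanoseconds before parsing
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1763593589.134247097\n") // value from the log
    	if err != nil {
    		panic(err)
    	}
    	// in the real flow the reference time is taken on the host right
    	// around the SSH call; time.Now() stands in for it here
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed tolerance
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }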
	I1119 23:06:29.170145  142733 start.go:83] releasing machines lock for "ha-487903-m02", held for 17.983849826s
	I1119 23:06:29.173173  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.173648  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.173674  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.175909  142733 out.go:179] * Found network options:
	I1119 23:06:29.177568  142733 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 23:06:29.178760  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:06:29.179292  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:06:29.179397  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:29.179416  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:29.182546  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.182562  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183004  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183038  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183185  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183194  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:29.183426  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:29.429918  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:29.437545  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:29.437605  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:29.459815  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:29.459846  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:29.459981  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:29.484636  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:29.506049  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:29.506131  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:29.529159  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:29.547692  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:29.709216  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:29.933205  142733 docker.go:234] disabling docker service ...
	I1119 23:06:29.933271  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:29.951748  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:29.967973  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:30.147148  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:30.300004  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:30.316471  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:30.341695  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:30.341768  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.355246  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:30.355313  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.368901  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.381931  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.395421  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:30.410190  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.424532  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.447910  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.462079  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:30.473475  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:30.473555  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:30.495385  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:06:30.507744  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:30.650555  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:30.778126  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:30.778224  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
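After restarting CRI-O the runner waits up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal polling sketch of that wait, assuming a plain stat loop with a fixed poll interval:

    // Sketch: poll for the CRI-O socket with a deadline, roughly what
    // "Will wait 60s for socket path /var/run/crio/crio.sock" amounts to.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond) // assumed poll interval
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }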
	I1119 23:06:30.784440  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:30.784509  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:30.789036  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:30.834259  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:06:30.834368  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.866387  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.901524  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:30.902829  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:06:30.906521  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.906929  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:30.906948  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.907113  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:30.912354  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:30.929641  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:06:30.929929  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:30.931609  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:06:30.931865  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 23:06:30.931896  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:30.931917  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:30.932057  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:30.932118  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:30.932128  142733 certs.go:257] generating profile certs ...
	I1119 23:06:30.932195  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:30.932244  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 23:06:30.932279  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:30.932291  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:30.932302  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:30.932313  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:30.932326  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:30.932335  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:30.932348  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:30.932360  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:30.932370  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:30.932416  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:30.932442  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:30.932451  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:30.932473  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:30.932493  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:30.932514  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:30.932559  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:30.932585  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:30.932599  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:30.932609  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:30.934682  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935112  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:30.935137  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935281  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:31.009328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:06:31.016386  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:06:31.030245  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:06:31.035820  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:06:31.049236  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:06:31.054346  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:06:31.067895  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:06:31.073323  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:06:31.087209  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:06:31.092290  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:06:31.105480  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:06:31.110774  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:06:31.124311  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:31.157146  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:31.188112  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:31.219707  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:31.252776  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:31.288520  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:31.324027  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:31.356576  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:31.388386  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:31.418690  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:31.450428  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:31.480971  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:06:31.502673  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:06:31.525149  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:06:31.547365  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:06:31.569864  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:06:31.592406  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:06:31.614323  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 23:06:31.638212  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:31.645456  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:31.659620  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665114  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665178  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.672451  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:06:31.686443  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:31.700888  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706357  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706409  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.713959  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:31.727492  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:31.741862  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747549  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747622  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.755354  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:06:31.769594  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:31.775132  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:31.783159  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:31.790685  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:31.798517  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:31.806212  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:31.814046  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
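The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate stays valid for at least another 24 hours. An equivalent check with Go's crypto/x509, assuming the certificate file is readable from where the program runs:

    // Sketch: the Go counterpart of `openssl x509 -noout -in cert.crt -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(certPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// true if the certificate's NotAfter falls inside the window
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	// path from the log; adjust for the certificate being checked
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }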
	I1119 23:06:31.822145  142733 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 23:06:31.822259  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:06:31.822290  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:31.822339  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:31.849048  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:31.849130  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 23:06:31.849212  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:31.862438  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:31.862506  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:06:31.874865  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:06:31.897430  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:31.918586  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:31.939534  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:31.943930  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:31.958780  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.100156  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.133415  142733 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:32.133754  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.133847  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:32.133936  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:32.133949  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 113.063µs
	I1119 23:06:32.133960  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:32.133970  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:32.134176  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.135284  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:06:32.136324  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.136777  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.139351  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.139927  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:32.139963  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.140169  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:32.321166  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.321693  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.321714  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.323895  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.326607  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327119  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:32.327146  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327377  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:32.352387  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:06:32.352506  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:06:32.352953  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:06:32.500722  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.500745  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.503448  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
	I1119 23:06:34.010161  142733 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:06:41.592816  142733 node_ready.go:49] node "ha-487903-m02" is "Ready"
	I1119 23:06:41.592846  142733 node_ready.go:38] duration metric: took 9.239866557s for node "ha-487903-m02" to be "Ready" ...
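The wait above polls the API server until the node reports the Ready condition. A sketch of the same condition check with client-go, assuming a kubeconfig at the default $HOME/.kube/config location (the test itself uses the profile's kubeconfig):

    // Sketch: check whether a node reports Ready=True, as the
    // `waiting up to 6m0s for node ... to be "Ready"` step does.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-487903-m02", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			fmt.Printf("node %s Ready=%s\n", node.Name, cond.Status)
    		}
    	}
    }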
	I1119 23:06:41.592864  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:06:41.592953  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.093838  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.118500  142733 api_server.go:72] duration metric: took 9.985021825s to wait for apiserver process to appear ...
	I1119 23:06:42.118528  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:06:42.118547  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.123892  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.123926  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
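The 500 responses above come from /healthz while the rbac/bootstrap-roles post-start hook is still pending, and the wait loop simply retries until the endpoint returns 200. A minimal polling sketch under that assumption; the certificate paths are simplified to $HOME/.minikube (the log uses the Jenkins workspace paths), and the overall timeout and poll interval are assumptions.

    // Sketch: poll the apiserver /healthz endpoint until it returns 200,
    // roughly what the api_server.go wait loop above is doing.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/ca.crt"))
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	cert, err := tls.LoadX509KeyPair(
    		os.ExpandEnv("$HOME/.minikube/profiles/ha-487903/client.crt"),
    		os.ExpandEnv("$HOME/.minikube/profiles/ha-487903/client.key"),
    	)
    	if err != nil {
    		panic(err)
    	}
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{
    			RootCAs:      pool,
    			Certificates: []tls.Certificate{cert},
    		}},
    	}

    	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.15:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz ok")
    				return
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // assumed poll interval
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for apiserver healthz")
    	os.Exit(1)
    }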
	I1119 23:06:42.619715  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.637068  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.637097  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.118897  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.133996  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.134034  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.618675  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.661252  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.661293  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.118914  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.149362  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.149396  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.618983  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.670809  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.670848  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.119579  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.130478  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:45.130510  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.619260  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.628758  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:06:45.631891  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:06:45.631928  142733 api_server.go:131] duration metric: took 3.513391545s to wait for apiserver health ...
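The retries above simply poll the apiserver's /healthz endpoint until it stops returning 500 (here the only failing check was poststarthook/rbac/bootstrap-roles, which clears once bootstrap roles are installed). A minimal sketch of that polling loop, assuming the endpoint is reachable and skipping TLS verification purely to keep the example self-contained (the real client authenticates with the cluster's certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls https://<host>/healthz until it returns 200 ("ok")
    // or the deadline expires. InsecureSkipVerify is only for this sketch.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.15:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }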
	I1119 23:06:45.631939  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:06:45.660854  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:06:45.660934  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660946  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660955  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.660965  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.660971  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.660978  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.660983  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.660988  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.660995  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.661002  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.661009  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.661014  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.661025  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.661033  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.661038  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.661043  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.661047  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.661051  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.661062  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.661066  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.661071  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.661075  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.661080  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.661084  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.661091  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.661095  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.661103  142733 system_pods.go:74] duration metric: took 29.156984ms to wait for pod list to return data ...
	I1119 23:06:45.661123  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:06:45.681470  142733 default_sa.go:45] found service account: "default"
	I1119 23:06:45.681503  142733 default_sa.go:55] duration metric: took 20.368831ms for default service account to be created ...
	I1119 23:06:45.681516  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:06:45.756049  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:06:45.756097  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756115  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756124  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.756130  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.756141  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.756153  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.756158  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.756163  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.756168  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.756180  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.756187  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.756193  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.756214  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.756220  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.756227  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.756232  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.756242  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.756248  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.756253  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.756258  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.756267  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.756276  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.756281  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.756286  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.756290  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.756299  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.756310  142733 system_pods.go:126] duration metric: took 74.786009ms to wait for k8s-apps to be running ...
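The same k8s-apps check can be reproduced from a workstation with kubectl. A sketch driven from Go's os/exec, assuming the profile's kubectl context is named ha-487903 (minikube normally names the context after the profile):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemNotRunning returns the kube-system pod lines whose STATUS
    // column is not "Running", using plain `kubectl get pods` output.
    func kubeSystemNotRunning(context string) ([]string, error) {
        out, err := exec.Command("kubectl", "--context", context,
            "get", "pods", "-n", "kube-system", "--no-headers").CombinedOutput()
        if err != nil {
            return nil, fmt.Errorf("kubectl: %v\n%s", err, out)
        }
        var notRunning []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 3 && fields[2] != "Running" {
                notRunning = append(notRunning, line)
            }
        }
        return notRunning, nil
    }

    func main() {
        bad, err := kubeSystemNotRunning("ha-487903")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d kube-system pods not Running\n", len(bad))
    }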
	I1119 23:06:45.756320  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:06:45.756377  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:06:45.804032  142733 system_svc.go:56] duration metric: took 47.697905ms WaitForService to wait for kubelet
	I1119 23:06:45.804075  142733 kubeadm.go:587] duration metric: took 13.670605736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:06:45.804108  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:06:45.809115  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809156  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809181  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809187  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809193  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809200  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809208  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809216  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809222  142733 node_conditions.go:105] duration metric: took 5.108401ms to run NodePressure ...
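The NodePressure step reads each node's capacity (here 2 CPUs and 17734596Ki of ephemeral storage per node). A sketch of the same readout, decoding `kubectl get nodes -o json` with the standard library; the struct is trimmed to just the fields the check looks at:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // nodeList mirrors only the parts of `kubectl get nodes -o json`
    // needed to print per-node capacity.
    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "ha-487903",
            "get", "nodes", "-o", "json").Output()
        if err != nil {
            fmt.Println("kubectl:", err)
            return
        }
        var nodes nodeList
        if err := json.Unmarshal(out, &nodes); err != nil {
            fmt.Println("decode:", err)
            return
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
        }
    }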
	I1119 23:06:45.809243  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:45.809289  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:45.811415  142733 out.go:203] 
	I1119 23:06:45.813102  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:45.813254  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.814787  142733 out.go:179] * Starting "ha-487903-m03" control-plane node in "ha-487903" cluster
	I1119 23:06:45.815937  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:45.815964  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:45.816100  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:45.816115  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:45.816268  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.816543  142733 start.go:360] acquireMachinesLock for ha-487903-m03: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:45.816612  142733 start.go:364] duration metric: took 39.245µs to acquireMachinesLock for "ha-487903-m03"

	I1119 23:06:45.816630  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:45.816642  142733 fix.go:54] fixHost starting: m03
	I1119 23:06:45.818510  142733 fix.go:112] recreateIfNeeded on ha-487903-m03: state=Stopped err=<nil>
	W1119 23:06:45.818540  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:45.819904  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m03" ...
	I1119 23:06:45.819950  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:45.819961  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:45.820828  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:45.821278  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:45.821805  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:45.823105  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m03</name>
	  <uuid>e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/ha-487903-m03.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b3:68:3d'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7a:90:da'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
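The domain XML above is what libmachine hands back to libvirt when restarting the VM; the interface MAC addresses in it are what the next step matches against DHCP leases. A small sketch that pulls the domain name and per-network MACs out of such a definition with encoding/xml, assuming the XML has been saved to a local file (the filename here is hypothetical):

    package main

    import (
        "encoding/xml"
        "fmt"
        "os"
    )

    // domain models only the parts of the libvirt domain XML used here.
    type domain struct {
        Name       string `xml:"name"`
        Interfaces []struct {
            MAC struct {
                Address string `xml:"address,attr"`
            } `xml:"mac"`
            Source struct {
                Network string `xml:"network,attr"`
            } `xml:"source"`
        } `xml:"devices>interface"`
    }

    func main() {
        data, err := os.ReadFile("ha-487903-m03.xml") // hypothetical dump of the XML above
        if err != nil {
            fmt.Println(err)
            return
        }
        var d domain
        if err := xml.Unmarshal(data, &d); err != nil {
            fmt.Println("parse:", err)
            return
        }
        for _, iface := range d.Interfaces {
            fmt.Printf("%s: %s on network %s\n", d.Name, iface.MAC.Address, iface.Source.Network)
        }
    }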
	
	I1119 23:06:47.444391  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:47.445887  142733 main.go:143] libmachine: domain is now running
	I1119 23:06:47.445908  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:47.446706  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447357  142733 main.go:143] libmachine: domain ha-487903-m03 has current primary IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447380  142733 main.go:143] libmachine: found domain IP: 192.168.39.160
	I1119 23:06:47.447388  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:47.447950  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.447985  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"}
	I1119 23:06:47.447998  142733 main.go:143] libmachine: reserved static IP address 192.168.39.160 for domain ha-487903-m03
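Reserving the static IP works by finding an existing DHCP lease for the VM's MAC in the mk-ha-487903 network. The same lookup can be done by hand; a sketch that shells out to `virsh net-dhcp-leases` and scans for the MAC (network name and MAC copied from this run, and the host is assumed to have virsh available):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // leaseFor returns the `virsh net-dhcp-leases` line for the given MAC,
    // or "" if no lease is currently held.
    func leaseFor(network, mac string) (string, error) {
        out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(strings.ToLower(line), strings.ToLower(mac)) {
                return strings.TrimSpace(line), nil
            }
        }
        return "", nil
    }

    func main() {
        lease, err := leaseFor("mk-ha-487903", "52:54:00:b3:68:3d")
        if err != nil {
            fmt.Println("virsh:", err)
            return
        }
        fmt.Println(lease)
    }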
	I1119 23:06:47.448003  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:47.448010  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:47.450788  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.451253  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451441  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:47.451661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:06:47.451673  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:50.540171  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:56.620202  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:59.621964  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: connection refused
	I1119 23:07:02.732773  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
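The "no route to host" and "connection refused" errors above are the expected phases while the guest boots and sshd comes up; the wait is essentially a TCP dial against port 22 retried until it succeeds, after which the real code runs `exit 0` over SSH. A minimal version of that loop:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH retries a TCP connection to host:22 until it succeeds or
    // the deadline passes. It only proves the port is open.
    func waitForSSH(host string, timeout time.Duration) error {
        addr := net.JoinHostPort(host, "22")
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            fmt.Println("still waiting:", err) // e.g. "no route to host", "connection refused"
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
        if err := waitForSSH("192.168.39.160", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }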
	I1119 23:07:02.736628  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737046  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.737076  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737371  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:02.737615  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:02.740024  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.740555  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740752  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.741040  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.741054  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:02.852322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:02.852355  142733 buildroot.go:166] provisioning hostname "ha-487903-m03"
	I1119 23:07:02.855519  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856083  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.856112  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856309  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.856556  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.856572  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m03 && echo "ha-487903-m03" | sudo tee /etc/hostname
	I1119 23:07:02.990322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m03
	
	I1119 23:07:02.993714  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994202  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.994233  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994405  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.994627  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.994651  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:03.118189  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
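Provisioning sets the hostname and pins it in /etc/hosts by running the commands shown above over SSH with the per-machine key. A sketch that drives the same commands through the system ssh client; the key path and the docker user come from this run's log, and the remote commands are quoted verbatim:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runOverSSH executes a remote command as the docker user, using the
    // private key minikube generated for this VM.
    func runOverSSH(ip, keyPath, cmd string) (string, error) {
        out, err := exec.Command("ssh",
            "-i", keyPath,
            "-o", "StrictHostKeyChecking=no",
            "docker@"+ip, cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        ip := "192.168.39.160"
        key := "/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa"
        for _, cmd := range []string{
            "hostname",
            `sudo hostname ha-487903-m03 && echo "ha-487903-m03" | sudo tee /etc/hostname`,
        } {
            out, err := runOverSSH(ip, key, cmd)
            fmt.Printf("%s -> %q err=%v\n", cmd, out, err)
        }
    }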
	I1119 23:07:03.118221  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:03.118237  142733 buildroot.go:174] setting up certificates
	I1119 23:07:03.118248  142733 provision.go:84] configureAuth start
	I1119 23:07:03.121128  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.121630  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.121656  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124221  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124569  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.124592  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124715  142733 provision.go:143] copyHostCerts
	I1119 23:07:03.124748  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124787  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:03.124797  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124892  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:03.125005  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125037  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:03.125047  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125090  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:03.125160  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125188  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:03.125198  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125238  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:03.125306  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m03 san=[127.0.0.1 192.168.39.160 ha-487903-m03 localhost minikube]
	I1119 23:07:03.484960  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:03.485022  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:03.487560  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488008  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.488032  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488178  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:03.574034  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:03.574117  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:03.604129  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:03.604216  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:03.635162  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:03.635235  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:07:03.668358  142733 provision.go:87] duration metric: took 550.091154ms to configureAuth
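configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.39.160, ha-487903-m03, localhost, minikube) and copies it to /etc/docker. The result can be verified by parsing the local copy of the certificate; a sketch with crypto/x509 (the path is the one shown in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
        fmt.Println("Org:     ", cert.Subject.Organization)
    }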
	I1119 23:07:03.668387  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:03.668643  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:03.671745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672214  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.672242  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672395  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:03.672584  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:03.672599  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:03.950762  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:03.950792  142733 machine.go:97] duration metric: took 1.213162195s to provisionDockerMachine
	I1119 23:07:03.950807  142733 start.go:293] postStartSetup for "ha-487903-m03" (driver="kvm2")
	I1119 23:07:03.950821  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:03.950908  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:03.954010  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954449  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.954472  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954609  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.043080  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:04.048534  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:04.048567  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:04.048645  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:04.048729  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:04.048741  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:04.048850  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:04.062005  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:04.095206  142733 start.go:296] duration metric: took 144.382125ms for postStartSetup
	I1119 23:07:04.095293  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:04.097927  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098314  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.098337  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098469  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.187620  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:04.187695  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:04.250288  142733 fix.go:56] duration metric: took 18.433638518s for fixHost
	I1119 23:07:04.253813  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254395  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.254423  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254650  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:04.254923  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:04.254938  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:04.407951  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593624.369608325
	
	I1119 23:07:04.407981  142733 fix.go:216] guest clock: 1763593624.369608325
	I1119 23:07:04.407992  142733 fix.go:229] Guest: 2025-11-19 23:07:04.369608325 +0000 UTC Remote: 2025-11-19 23:07:04.250316644 +0000 UTC m=+71.594560791 (delta=119.291681ms)
	I1119 23:07:04.408018  142733 fix.go:200] guest clock delta is within tolerance: 119.291681ms
	I1119 23:07:04.408026  142733 start.go:83] releasing machines lock for "ha-487903-m03", held for 18.591403498s
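The clock check runs `date +%s.%N` on the guest and compares it to the host's clock, accepting a small delta (about 119ms here). A sketch of that comparison, reusing the ssh-over-exec idea from above; the 2-second tolerance is an assumption for illustration, not the value the test uses:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa"
        out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
            "docker@192.168.39.160", "date +%s.%N").Output()
        if err != nil {
            fmt.Println("ssh:", err)
            return
        }
        guestSec, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            fmt.Println("parse:", err)
            return
        }
        hostSec := float64(time.Now().UnixNano()) / 1e9
        delta := time.Duration((hostSec - guestSec) * float64(time.Second))
        fmt.Printf("guest clock delta: %v\n", delta)
        if delta > 2*time.Second || delta < -2*time.Second {
            fmt.Println("clock skew too large; consider syncing the guest clock")
        }
    }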
	I1119 23:07:04.411093  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.411490  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.411518  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.413431  142733 out.go:179] * Found network options:
	I1119 23:07:04.414774  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191
	W1119 23:07:04.415854  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.415891  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416317  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416348  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:04.416422  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:04.416436  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:04.419695  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.419745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420204  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420228  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420310  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420352  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420397  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.420643  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.657635  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:04.665293  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:04.665372  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:04.689208  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:04.689244  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:04.689352  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:04.714215  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:04.733166  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:04.733238  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:04.756370  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:04.778280  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:04.943140  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:05.174139  142733 docker.go:234] disabling docker service ...
	I1119 23:07:05.174230  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:05.192652  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:05.219388  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:05.383745  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:05.538084  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:05.555554  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:05.579503  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:05.579567  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.593464  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:05.593530  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.609133  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.624066  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.637817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:05.653008  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.666833  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.691556  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.705398  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:05.717404  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:05.717480  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:05.740569  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:07:05.753510  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:05.907119  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
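The sed calls above pin the pause image and switch CRI-O to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before restarting the service. An equivalent rewrite of those two keys, sketched with regexp against a local copy of the file (paths and values are copied from the log; editing the real file needs root and a crio restart afterwards):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        conf := string(data)
        // Same substitutions the log performs with sed -i.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("02-crio.conf updated; run `sudo systemctl restart crio` to apply")
    }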
	I1119 23:07:06.048396  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:06.048486  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:06.055638  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:06.055719  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:06.061562  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:06.110271  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:07:06.110342  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:06.146231  142733 ssh_runner.go:195] Run: crio --version
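`crictl version` reports the runtime shown above (cri-o 1.29.1 speaking CRI v1). A sketch that runs it inside the VM, or anywhere crictl is installed, and splits the key/value lines of its output:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
        if err != nil {
            fmt.Println("crictl:", err)
            return
        }
        info := map[string]string{}
        for _, line := range strings.Split(string(out), "\n") {
            parts := strings.SplitN(line, ":", 2)
            if len(parts) == 2 {
                info[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
            }
        }
        fmt.Printf("runtime %s %s (CRI %s)\n",
            info["RuntimeName"], info["RuntimeVersion"], info["RuntimeApiVersion"])
    }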
	I1119 23:07:06.178326  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:06.179543  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:06.180760  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:06.184561  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.184934  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:06.184957  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.185144  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:06.190902  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:06.207584  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:06.207839  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:06.209435  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:06.209634  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.160
	I1119 23:07:06.209644  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:06.209656  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:06.209760  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:06.209804  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:06.209811  142733 certs.go:257] generating profile certs ...
	I1119 23:07:06.209893  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:07:06.209959  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.0aa3aad5
	I1119 23:07:06.210018  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:07:06.210035  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:06.210054  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:06.210067  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:06.210080  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:06.210091  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:07:06.210102  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:07:06.210114  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:07:06.210126  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:07:06.210182  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:06.210223  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:06.210235  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:06.210266  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:06.210291  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:06.210312  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:06.210372  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:06.210412  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:06.210426  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:06.210444  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:06.213240  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213640  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:06.213661  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213778  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:06.286328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:07:06.292502  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:07:06.306380  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:07:06.311916  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:07:06.325372  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:07:06.331268  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:07:06.346732  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:07:06.351946  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:07:06.366848  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:07:06.372483  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:07:06.389518  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:07:06.395938  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:07:06.409456  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:06.450401  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:06.486719  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:06.523798  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:06.561368  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:07:06.599512  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:07:06.634946  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:07:06.670031  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:07:06.704068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:06.735677  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:06.768990  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:06.806854  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:07:06.832239  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:07:06.856375  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:07:06.879310  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:07:06.902404  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:07:06.927476  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:07:06.952223  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 23:07:06.974196  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:06.981644  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:06.999412  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005373  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005446  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.013895  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:07.031130  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:07.046043  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.051937  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.052014  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.059543  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:07.078500  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:07.093375  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099508  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099578  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.107551  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:07:07.123243  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:07.129696  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:07:07.137849  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:07:07.145809  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:07:07.153731  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:07:07.161120  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:07:07.168309  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
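	(For reference: the `openssl x509 -hash` / `ln -fs ... <hash>.0` steps above create the subject-hash symlinks that OpenSSL's CApath lookup expects, and the `-checkend 86400` probes only confirm each certificate stays valid for at least another 24 hours. A minimal Go sketch of both checks; the file paths are illustrative, not the exact minikube helpers.)

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // subjectHash returns the OpenSSL subject hash of a certificate; the symlink
	    // /etc/ssl/certs/<hash>.0 created above is what OpenSSL's CApath lookup expects.
	    func subjectHash(certPath string) (string, error) {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	    	return strings.TrimSpace(string(out)), err
	    }

	    // validFor24h mirrors `openssl x509 -checkend 86400`: openssl exits 0 only when
	    // the certificate does not expire within the next 86400 seconds.
	    func validFor24h(certPath string) bool {
	    	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
	    }

	    func main() {
	    	// Illustrative paths; the log checks several certs under /var/lib/minikube/certs.
	    	hash, _ := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	    	fmt.Printf("symlink target would be /etc/ssl/certs/%s.0\n", hash)
	    	fmt.Println("still valid for 24h:", validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
	    }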
	I1119 23:07:07.176142  142733 kubeadm.go:935] updating node {m03 192.168.39.160 8443 v1.34.1 crio true true} ...
	I1119 23:07:07.176256  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:07:07.176285  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:07:07.176329  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:07:07.203479  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:07:07.203570  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1119 23:07:07.203646  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:07.217413  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:07.217503  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:07:07.230746  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:07.256658  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:07.282507  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:07:07.305975  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:07.311016  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:07.328648  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.494364  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.517777  142733 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:07:07.518159  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.518271  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:07.518379  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:07.518395  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 133.678µs
	I1119 23:07:07.518407  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:07.518421  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:07.518647  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.520684  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:07.520832  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.521966  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.523804  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524372  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:07.524416  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524599  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:07.723792  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.724326  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.724350  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.726364  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.728774  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729239  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:07.729270  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729424  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:07.746212  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:07.746278  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:07.746586  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m03" to be "Ready" ...
	I1119 23:07:07.858504  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.858530  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.860355  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.862516  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.862974  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:07.863000  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.863200  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:08.011441  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:08.011468  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:08.013393  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03
	W1119 23:07:09.751904  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:12.252353  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:14.254075  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:16.256443  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:18.752485  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	I1119 23:07:19.751738  142733 node_ready.go:49] node "ha-487903-m03" is "Ready"
	I1119 23:07:19.751783  142733 node_ready.go:38] duration metric: took 12.005173883s for node "ha-487903-m03" to be "Ready" ...
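	(The 12s wait above is the standard poll of the node's Ready condition. A compact client-go sketch of the same check; the kubeconfig path and 2s poll interval are illustrative assumptions, since minikube builds its client config in-process.)

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	deadline := time.Now().Add(6 * time.Minute) // same overall budget as the log above
	    	for time.Now().Before(deadline) {
	    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-487903-m03", metav1.GetOptions{})
	    		if err == nil {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					fmt.Println("node is Ready")
	    					return
	    				}
	    			}
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    	fmt.Println("timed out waiting for Ready")
	    }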
	I1119 23:07:19.751803  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:07:19.751911  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:07:19.833604  142733 api_server.go:72] duration metric: took 12.315777974s to wait for apiserver process to appear ...
	I1119 23:07:19.833635  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:07:19.833668  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:07:19.841482  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:07:19.842905  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:07:19.842932  142733 api_server.go:131] duration metric: took 9.287176ms to wait for apiserver health ...
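	(The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 and the body "ok". A minimal sketch; TLS verification is skipped here only to keep the example short, whereas minikube trusts the cluster CA.)

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout:   5 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	resp, err := client.Get("https://192.168.39.15:8443/healthz")
	    	if err != nil {
	    		fmt.Println("apiserver not healthy yet:", err)
	    		return
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok", as logged above
	    }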
	I1119 23:07:19.842951  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:07:19.855636  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:07:19.855671  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.855679  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.855689  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.855695  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.855700  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.855705  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.855710  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.855714  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.855724  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.855733  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.855738  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.855743  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.855747  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.855753  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.855760  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.855764  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.855769  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.855774  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.855778  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.855783  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.855793  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.855797  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.855802  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.855806  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.855814  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.855818  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.855827  142733 system_pods.go:74] duration metric: took 12.86809ms to wait for pod list to return data ...
	I1119 23:07:19.855842  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:07:19.860573  142733 default_sa.go:45] found service account: "default"
	I1119 23:07:19.860597  142733 default_sa.go:55] duration metric: took 4.749483ms for default service account to be created ...
	I1119 23:07:19.860606  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:07:19.870790  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:07:19.870825  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.870831  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.870836  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.870840  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.870843  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.870847  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.870851  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.870854  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.870857  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.870861  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.870865  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.870870  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.870895  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.870902  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.870911  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.870916  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.870924  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.870929  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.870936  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.870941  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.870946  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.870953  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.870957  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.870963  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.870966  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.870969  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.870982  142733 system_pods.go:126] duration metric: took 10.369487ms to wait for k8s-apps to be running ...
	I1119 23:07:19.870995  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:19.871070  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:19.923088  142733 system_svc.go:56] duration metric: took 52.080591ms WaitForService to wait for kubelet
	I1119 23:07:19.923137  142733 kubeadm.go:587] duration metric: took 12.405311234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:19.923168  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:19.930259  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930299  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930316  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930323  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930329  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930334  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930343  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930352  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930359  142733 node_conditions.go:105] duration metric: took 7.184829ms to run NodePressure ...
	I1119 23:07:19.930381  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:19.930425  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:19.932180  142733 out.go:203] 
	I1119 23:07:19.934088  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:19.934226  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.935991  142733 out.go:179] * Starting "ha-487903-m04" worker node in "ha-487903" cluster
	I1119 23:07:19.937566  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:07:19.937584  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:07:19.937693  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:07:19.937716  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:07:19.937810  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.938027  142733 start.go:360] acquireMachinesLock for ha-487903-m04: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:07:19.938076  142733 start.go:364] duration metric: took 28.868µs to acquireMachinesLock for "ha-487903-m04"
	I1119 23:07:19.938095  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:07:19.938109  142733 fix.go:54] fixHost starting: m04
	I1119 23:07:19.940296  142733 fix.go:112] recreateIfNeeded on ha-487903-m04: state=Stopped err=<nil>
	W1119 23:07:19.940327  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:07:19.942168  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m04" ...
	I1119 23:07:19.942220  142733 main.go:143] libmachine: starting domain...
	I1119 23:07:19.942265  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:07:19.943145  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:07:19.943566  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:07:19.944170  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:07:19.945811  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m04</name>
	  <uuid>2ce148a1-b982-46f6-ada0-6a5a5b14ddce</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/ha-487903-m04.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:eb:f3:c3'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:03:3a:d4'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 23:07:21.541216  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:07:21.542947  142733 main.go:143] libmachine: domain is now running
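	(The domain XML above is handed to libvirt to bring the stopped worker VM back up. The kvm2 driver talks to libvirt directly; a rough command-line approximation of the same start-and-discover-IP sequence, shelling out to virsh, is sketched below for illustration only.)

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	name := "ha-487903-m04"
	    	// Start the already-defined domain, then ask libvirt for its DHCP lease
	    	// to learn the IP, the same information the log extracts below.
	    	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
	    		fmt.Printf("virsh start: %v\n%s", err, out)
	    		return
	    	}
	    	out, _ := exec.Command("virsh", "domifaddr", name, "--source", "lease").CombinedOutput()
	    	fmt.Printf("%s", out)
	    }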
	I1119 23:07:21.542968  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:07:21.543929  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544529  142733 main.go:143] libmachine: domain ha-487903-m04 has current primary IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544546  142733 main.go:143] libmachine: found domain IP: 192.168.39.187
	I1119 23:07:21.544554  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:07:21.545091  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.545120  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"}
	I1119 23:07:21.545133  142733 main.go:143] libmachine: reserved static IP address 192.168.39.187 for domain ha-487903-m04
	I1119 23:07:21.545137  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:07:21.545142  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:07:21.547650  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548218  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.548249  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548503  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:21.548718  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:21.548730  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:07:24.652184  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:30.732203  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:34.764651  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: connection refused
	I1119 23:07:37.880284  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
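	(The three dial errors above are the normal progression while the VM boots: "no route to host" until the interface is up, "connection refused" until sshd starts, then success. A minimal retry loop over the same TCP check; the address and timeouts are illustrative.)

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    // waitForSSH dials the node's SSH port until the TCP handshake succeeds,
	    // matching the retry progression logged above.
	    func waitForSSH(addr string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	    		if err == nil {
	    			conn.Close()
	    			return nil
	    		}
	    		fmt.Println("SSH not reachable yet:", err)
	    		time.Sleep(3 * time.Second)
	    	}
	    	return fmt.Errorf("timed out waiting for %s", addr)
	    }

	    func main() {
	    	fmt.Println(waitForSSH("192.168.39.187:22", 2*time.Minute))
	    }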
	I1119 23:07:37.884099  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884565  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.884591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884934  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:37.885280  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:37.887971  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888368  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.888391  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888542  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:37.888720  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:37.888729  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:37.998350  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:37.998394  142733 buildroot.go:166] provisioning hostname "ha-487903-m04"
	I1119 23:07:38.002080  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002563  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.002588  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002794  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.003043  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.003057  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m04 && echo "ha-487903-m04" | sudo tee /etc/hostname
	I1119 23:07:38.135349  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m04
	
	I1119 23:07:38.138757  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139357  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.139392  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139707  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.140010  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.140053  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:38.264087  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:07:38.264126  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:38.264149  142733 buildroot.go:174] setting up certificates
	I1119 23:07:38.264161  142733 provision.go:84] configureAuth start
	I1119 23:07:38.267541  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.268176  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.268215  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.270752  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271136  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.271156  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271421  142733 provision.go:143] copyHostCerts
	I1119 23:07:38.271453  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271483  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:38.271492  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271573  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:38.271646  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271664  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:38.271667  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271693  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:38.271735  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271751  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:38.271757  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271779  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:38.271823  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m04 san=[127.0.0.1 192.168.39.187 ha-487903-m04 localhost minikube]
	I1119 23:07:38.932314  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:38.932380  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:38.935348  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.935810  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.935836  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.936006  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.025808  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:39.025896  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:39.060783  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:39.060907  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:39.093470  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:39.093540  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1119 23:07:39.126116  142733 provision.go:87] duration metric: took 861.930238ms to configureAuth
	I1119 23:07:39.126158  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:39.126455  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:39.129733  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130126  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.130155  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130312  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.130560  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.130587  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:39.433038  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:39.433084  142733 machine.go:97] duration metric: took 1.547777306s to provisionDockerMachine
	I1119 23:07:39.433101  142733 start.go:293] postStartSetup for "ha-487903-m04" (driver="kvm2")
	I1119 23:07:39.433114  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:39.433178  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:39.436063  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436658  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.436689  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436985  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.524100  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:39.529723  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:39.529752  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:39.529847  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:39.529973  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:39.529988  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:39.530101  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:39.544274  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:39.576039  142733 start.go:296] duration metric: took 142.916645ms for postStartSetup
	I1119 23:07:39.576112  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:39.578695  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579305  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.579334  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579504  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.668947  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:39.669041  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:39.733896  142733 fix.go:56] duration metric: took 19.795762355s for fixHost
	I1119 23:07:39.737459  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738018  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.738061  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738362  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.738661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.738687  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:39.869213  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593659.839682658
	
	I1119 23:07:39.869234  142733 fix.go:216] guest clock: 1763593659.839682658
	I1119 23:07:39.869241  142733 fix.go:229] Guest: 2025-11-19 23:07:39.839682658 +0000 UTC Remote: 2025-11-19 23:07:39.733931353 +0000 UTC m=+107.078175487 (delta=105.751305ms)
	I1119 23:07:39.869257  142733 fix.go:200] guest clock delta is within tolerance: 105.751305ms
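The two lines above show minikube reading the guest's wall clock over SSH (`date +%s.%N`), comparing it with the host's reference time, and accepting the ~106ms drift as within tolerance. A minimal stand-alone sketch of the same check, assuming passwordless SSH to the guest as configured earlier in this log (the user/IP here simply mirror the log and are illustrative):

    # Compare host time with the guest's clock; prints the drift in seconds.
    GUEST=192.168.39.187                 # node IP taken from the log above
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh docker@"$GUEST" 'date +%s.%N')
    echo "guest clock delta: $(echo "$guest_ts - $host_ts" | bc) s"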
	I1119 23:07:39.869262  142733 start.go:83] releasing machines lock for "ha-487903-m04", held for 19.931174771s
	I1119 23:07:39.872591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.873064  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.873085  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.875110  142733 out.go:179] * Found network options:
	I1119 23:07:39.876331  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	W1119 23:07:39.877435  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877458  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877478  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877889  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877920  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877932  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:39.877962  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:39.877987  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:39.881502  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.881991  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882088  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882128  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882283  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.882500  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882524  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882696  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:40.118089  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:40.126955  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:40.127054  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:40.150315  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:40.150351  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:40.150436  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:40.176112  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:40.195069  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:40.195148  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:40.217113  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:40.240578  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:40.404108  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:40.642170  142733 docker.go:234] disabling docker service ...
	I1119 23:07:40.642260  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:40.659709  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:40.677698  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:40.845769  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:41.005373  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:41.028115  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:41.057337  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:41.057425  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.072373  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:41.072466  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.086681  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.100921  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.115817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:41.132398  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.149261  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.174410  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.189666  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:41.202599  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:41.202679  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:41.228059  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:07:41.243031  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:41.403712  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
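The block above is the full container-runtime switch for this node: containerd, cri-docker and docker are stopped and masked, crictl is pointed at the CRI-O socket, and the CRI-O drop-in is edited for the pause image, the cgroupfs cgroup manager, the conmon cgroup and the unprivileged-port sysctl before the daemon is restarted. A condensed sketch of the same edits (paths and values are taken from the log; verify them against your image before reusing):

    # Point crictl at CRI-O.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image and cgroup driver, as in the sed calls above.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Kernel prerequisites checked above: bridge netfilter and IPv4 forwarding.
    sudo modprobe br_netfilter && sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio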
	I1119 23:07:41.527678  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:41.527765  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:41.534539  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:41.534620  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:41.539532  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:41.585994  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:07:41.586086  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.621736  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.656086  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:41.657482  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:41.658756  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:41.659970  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	I1119 23:07:41.664105  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:41.664550  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664716  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:41.670624  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:41.688618  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:41.688858  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:41.690292  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:41.690482  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.187
	I1119 23:07:41.690491  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:41.690504  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:41.690631  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:41.690692  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:41.690711  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:41.690731  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:41.690750  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:41.690768  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:41.690840  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:41.690886  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:41.690897  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:41.690917  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:41.690937  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:41.690958  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:41.690994  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:41.691025  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.691038  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:41.691048  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:41.691068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:41.726185  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:41.762445  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:41.804578  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:41.841391  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:41.881178  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:41.917258  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:41.953489  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:41.961333  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:41.977066  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983550  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983610  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.991656  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:42.006051  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:42.021516  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028801  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028900  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.036899  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:42.052553  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:42.067472  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073674  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073751  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.081607  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
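The certificate steps above copy each CA bundle into /usr/share/ca-certificates and then link it under its OpenSSL subject hash in /etc/ssl/certs so the system trust store can resolve it. A minimal sketch of that hash-link step for one file (the PEM name matches the log; the hash value differs per certificate):

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")    # e.g. b5213941 in the log above
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"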
	I1119 23:07:42.096183  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:42.101534  142733 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 23:07:42.101590  142733 kubeadm.go:935] updating node {m04 192.168.39.187 0 v1.34.1 crio false true} ...
	I1119 23:07:42.101683  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:07:42.101762  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:42.115471  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:42.115548  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1119 23:07:42.129019  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:42.153030  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:42.178425  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:42.183443  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:42.200493  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.356810  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
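Before kubelet is started, the joined node gets its kubeadm drop-in and kubelet unit written, and control-plane.minikube.internal is pinned to the HA VIP in /etc/hosts by filtering out any stale entry and rewriting the file through a temporary copy. A sketch of that /etc/hosts pattern plus the restart, with the VIP taken from the log:

    VIP=192.168.39.254    # API server HA VIP from the log above
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '%s\tcontrol-plane.minikube.internal\n' "$VIP"
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
    sudo systemctl daemon-reload && sudo systemctl start kubelet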
	I1119 23:07:42.394017  142733 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1119 23:07:42.394368  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.394458  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:42.394553  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:42.394567  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 116.988µs
	I1119 23:07:42.394578  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:42.394596  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:42.394838  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.395796  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:42.397077  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.397151  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.400663  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401297  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:42.401366  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401574  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:42.612769  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:42.613454  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.613478  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.615709  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.618644  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619227  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:42.619265  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619437  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:42.650578  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:42.650662  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:42.651008  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m04" to be "Ready" ...
	I1119 23:07:42.759664  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.759695  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.762502  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.766101  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766612  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:42.766645  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766903  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:42.916732  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.916761  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.919291  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.922664  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923283  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:42.923322  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923548  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:43.068345  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:43.068378  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:43.068389  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03 ha-487903-m04
	I1119 23:07:43.156120  142733 node_ready.go:49] node "ha-487903-m04" is "Ready"
	I1119 23:07:43.156156  142733 node_ready.go:38] duration metric: took 505.123719ms for node "ha-487903-m04" to be "Ready" ...
	I1119 23:07:43.156173  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:43.156241  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:43.175222  142733 system_svc.go:56] duration metric: took 19.040723ms WaitForService to wait for kubelet
	I1119 23:07:43.175261  142733 kubeadm.go:587] duration metric: took 781.202644ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:43.175288  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:43.180835  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180870  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180910  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180916  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180924  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180942  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180953  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180959  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180965  142733 node_conditions.go:105] duration metric: took 5.670636ms to run NodePressure ...
	I1119 23:07:43.180984  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:43.181017  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:43.181360  142733 ssh_runner.go:195] Run: rm -f paused
	I1119 23:07:43.187683  142733 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:07:43.188308  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:07:43.202770  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210054  142733 pod_ready.go:94] pod "coredns-66bc5c9577-5gt2t" is "Ready"
	I1119 23:07:43.210077  142733 pod_ready.go:86] duration metric: took 7.281319ms for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210085  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.216456  142733 pod_ready.go:94] pod "coredns-66bc5c9577-zjxkb" is "Ready"
	I1119 23:07:43.216477  142733 pod_ready.go:86] duration metric: took 6.387459ms for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.220711  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230473  142733 pod_ready.go:94] pod "etcd-ha-487903" is "Ready"
	I1119 23:07:43.230503  142733 pod_ready.go:86] duration metric: took 9.759051ms for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230514  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238350  142733 pod_ready.go:94] pod "etcd-ha-487903-m02" is "Ready"
	I1119 23:07:43.238386  142733 pod_ready.go:86] duration metric: took 7.863104ms for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238400  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.389841  142733 request.go:683] "Waited before sending request" delay="151.318256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-487903-m03"
	I1119 23:07:43.588929  142733 request.go:683] "Waited before sending request" delay="193.203585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:43.592859  142733 pod_ready.go:94] pod "etcd-ha-487903-m03" is "Ready"
	I1119 23:07:43.592895  142733 pod_ready.go:86] duration metric: took 354.487844ms for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.789462  142733 request.go:683] "Waited before sending request" delay="196.405608ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1119 23:07:43.797307  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.989812  142733 request.go:683] "Waited before sending request" delay="192.389949ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903"
	I1119 23:07:44.189117  142733 request.go:683] "Waited before sending request" delay="193.300165ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:44.194456  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903" is "Ready"
	I1119 23:07:44.194483  142733 pod_ready.go:86] duration metric: took 397.15415ms for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.194492  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.388959  142733 request.go:683] "Waited before sending request" delay="194.329528ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m02"
	I1119 23:07:44.589884  142733 request.go:683] "Waited before sending request" delay="195.382546ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:44.596472  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m02" is "Ready"
	I1119 23:07:44.596506  142733 pod_ready.go:86] duration metric: took 402.007843ms for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.596519  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.788946  142733 request.go:683] "Waited before sending request" delay="192.297042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m03"
	I1119 23:07:44.988960  142733 request.go:683] "Waited before sending request" delay="194.310641ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:44.996400  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m03" is "Ready"
	I1119 23:07:44.996441  142733 pod_ready.go:86] duration metric: took 399.911723ms for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.188855  142733 request.go:683] "Waited before sending request" delay="192.290488ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1119 23:07:45.196689  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.389182  142733 request.go:683] "Waited before sending request" delay="192.281881ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903"
	I1119 23:07:45.589591  142733 request.go:683] "Waited before sending request" delay="194.384266ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:45.595629  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903" is "Ready"
	I1119 23:07:45.595661  142733 pod_ready.go:86] duration metric: took 398.942038ms for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.595674  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.789154  142733 request.go:683] "Waited before sending request" delay="193.378185ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m02"
	I1119 23:07:45.989593  142733 request.go:683] "Waited before sending request" delay="195.373906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:45.995418  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m02" is "Ready"
	I1119 23:07:45.995451  142733 pod_ready.go:86] duration metric: took 399.769417ms for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.995462  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.188855  142733 request.go:683] "Waited before sending request" delay="193.309398ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m03"
	I1119 23:07:46.389512  142733 request.go:683] "Waited before sending request" delay="194.260664ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:46.394287  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m03" is "Ready"
	I1119 23:07:46.394312  142733 pod_ready.go:86] duration metric: took 398.844264ms for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.589870  142733 request.go:683] "Waited before sending request" delay="195.416046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1119 23:07:46.597188  142733 pod_ready.go:83] waiting for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.789771  142733 request.go:683] "Waited before sending request" delay="192.426623ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77wjf"
	I1119 23:07:46.989150  142733 request.go:683] "Waited before sending request" delay="193.435229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:46.993720  142733 pod_ready.go:94] pod "kube-proxy-77wjf" is "Ready"
	I1119 23:07:46.993753  142733 pod_ready.go:86] duration metric: took 396.52945ms for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.993765  142733 pod_ready.go:83] waiting for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.189146  142733 request.go:683] "Waited before sending request" delay="195.267437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk7mh"
	I1119 23:07:47.388849  142733 request.go:683] "Waited before sending request" delay="192.29395ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:47.395640  142733 pod_ready.go:94] pod "kube-proxy-fk7mh" is "Ready"
	I1119 23:07:47.395670  142733 pod_ready.go:86] duration metric: took 401.897062ms for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.395683  142733 pod_ready.go:83] waiting for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.589099  142733 request.go:683] "Waited before sending request" delay="193.31568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tkx9r"
	I1119 23:07:47.789418  142733 request.go:683] "Waited before sending request" delay="195.323511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:47.795048  142733 pod_ready.go:94] pod "kube-proxy-tkx9r" is "Ready"
	I1119 23:07:47.795078  142733 pod_ready.go:86] duration metric: took 399.387799ms for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.795088  142733 pod_ready.go:83] waiting for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.989569  142733 request.go:683] "Waited before sending request" delay="194.336733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxtk6"
	I1119 23:07:48.189017  142733 request.go:683] "Waited before sending request" delay="192.313826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m04"
	I1119 23:07:48.194394  142733 pod_ready.go:94] pod "kube-proxy-zxtk6" is "Ready"
	I1119 23:07:48.194435  142733 pod_ready.go:86] duration metric: took 399.338885ms for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.388945  142733 request.go:683] "Waited before sending request" delay="194.328429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1119 23:07:48.555571  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.789654  142733 request.go:683] "Waited before sending request" delay="195.382731ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:48.795196  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903" is "Ready"
	I1119 23:07:48.795234  142733 pod_ready.go:86] duration metric: took 239.629107ms for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.795246  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.989712  142733 request.go:683] "Waited before sending request" delay="194.356732ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m02"
	I1119 23:07:49.189524  142733 request.go:683] "Waited before sending request" delay="194.365482ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:49.195480  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m02" is "Ready"
	I1119 23:07:49.195503  142733 pod_ready.go:86] duration metric: took 400.248702ms for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.195512  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.388917  142733 request.go:683] "Waited before sending request" delay="193.285895ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m03"
	I1119 23:07:49.589644  142733 request.go:683] "Waited before sending request" delay="195.362698ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:49.594210  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m03" is "Ready"
	I1119 23:07:49.594248  142733 pod_ready.go:86] duration metric: took 398.725567ms for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.594266  142733 pod_ready.go:40] duration metric: took 6.406545371s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
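The readiness loop above polls each control-plane pod through the API server; the "Waited before sending request" lines are client-go's client-side throttling, not errors. Roughly the same check can be reproduced with kubectl wait against the same label selectors, though minikube's own logic also accepts pods that are gone:

    kubectl --context ha-487903 -n kube-system wait pod \
      -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m
    kubectl --context ha-487903 -n kube-system wait pod \
      -l component=kube-scheduler --for=condition=Ready --timeout=4m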
	I1119 23:07:49.639756  142733 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 23:07:49.641778  142733 out.go:179] * Done! kubectl is now configured to use "ha-487903" cluster and "default" namespace by default
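With the restart finished, the profile can be inspected directly; a quick follow-up check (plain kubectl, not part of the log output above) confirms the four ha-487903 nodes and the kube-system pods report Ready:

    kubectl --context ha-487903 get nodes -o wide
    kubectl --context ha-487903 -n kube-system get pods -o wide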
	
	
	==> CRI-O <==
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.852642503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593755852616302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1225d59c-996e-4a5b-806e-7a4a053bb131 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.853490754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9625387b-e859-4c57-826d-c6f827ce4678 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.853567901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9625387b-e859-4c57-826d-c6f827ce4678 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.854219900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9625387b-e859-4c57-826d-c6f827ce4678 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.915604285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61d11654-526d-490d-b2c9-385510d93694 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.915680111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61d11654-526d-490d-b2c9-385510d93694 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.917575061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=215e320d-1f79-46fc-a26f-27f81bca5d57 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.918314738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593755918286500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=215e320d-1f79-46fc-a26f-27f81bca5d57 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.919073120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a2a946e-5d4c-4080-98b3-05ab6f8c377a name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.919151725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a2a946e-5d4c-4080-98b3-05ab6f8c377a name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.919506760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a2a946e-5d4c-4080-98b3-05ab6f8c377a name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.976970371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23a3956e-227d-411e-9a79-eed1790fac53 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.977073328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23a3956e-227d-411e-9a79-eed1790fac53 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.979448329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=249af2a1-c948-4d41-b43b-0f5620153035 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.980066508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593755980042000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=249af2a1-c948-4d41-b43b-0f5620153035 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.981398524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=194aa355-1cb5-4955-bc91-756596845206 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.981527272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=194aa355-1cb5-4955-bc91-756596845206 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:15 ha-487903 crio[1051]: time="2025-11-19 23:09:15.982134107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=194aa355-1cb5-4955-bc91-756596845206 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:16 ha-487903 crio[1051]: time="2025-11-19 23:09:16.049525495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88a0968f-31e7-4db3-ac1d-4dd8df7aa413 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:16 ha-487903 crio[1051]: time="2025-11-19 23:09:16.049626863Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88a0968f-31e7-4db3-ac1d-4dd8df7aa413 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:16 ha-487903 crio[1051]: time="2025-11-19 23:09:16.052253811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cde6ed4-6518-4004-b8be-b63df15fa25d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:16 ha-487903 crio[1051]: time="2025-11-19 23:09:16.054159478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593756054133313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cde6ed4-6518-4004-b8be-b63df15fa25d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:16 ha-487903 crio[1051]: time="2025-11-19 23:09:16.054825488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b15a7fad-93dc-47ec-a40f-9092b3b6e7f1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:16 ha-487903 crio[1051]: time="2025-11-19 23:09:16.054979662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b15a7fad-93dc-47ec-a40f-9092b3b6e7f1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:16 ha-487903 crio[1051]: time="2025-11-19 23:09:16.055941448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b15a7fad-93dc-47ec-a40f-9092b3b6e7f1 name=/runtime.v1.RuntimeService/ListContainers
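
The record above is CRI-O's reply to a /runtime.v1.RuntimeService/ListContainers call, captured by minikube's log collector via its otel interceptor. For readers who want to reproduce a comparable listing outside the test harness, a minimal Go sketch against the CRI API is shown below; the crio.sock path and the 5-second timeout are assumptions, not values taken from this run.

    // list_containers.go - minimal sketch of the ListContainers RPC recorded above.
    // Assumes CRI-O listens on /var/run/crio/crio.sock (not verified for this node).
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Same RPC as the log record: /runtime.v1.RuntimeService/ListContainers.
    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
    		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%-13.13s %-25s %s attempt=%d\n",
    			c.Id, c.Metadata.Name, c.State, c.Metadata.Attempt)
    	}
    }

On the node itself, `crictl ps -a` gives the same information in the tabular form shown in the "container status" section that follows.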
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6554703e81880       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       4                   bcf53581b6e1f       storage-provisioner
	08ecabad51ca1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   2 minutes ago       Running             busybox                   1                   270bc5025a208       busybox-7b57f96db7-vl8nf
	f4db302f8e1d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Exited              storage-provisioner       3                   bcf53581b6e1f       storage-provisioner
	cf3b8bef3853f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   1                   02cf6c2f51b7a       coredns-66bc5c9577-zjxkb
	671e74cfb90ed       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago       Running             kindnet-cni               1                   21cea62c9e5ab       kindnet-p9nqh
	323c3e00977ee       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   1                   acd42dfb49d39       coredns-66bc5c9577-5gt2t
	8e1ce69b078fd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      2 minutes ago       Running             kube-proxy                1                   2d9689b8c4fc5       kube-proxy-fk7mh
	407c1906949db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      2 minutes ago       Running             kube-controller-manager   2                   a4df466e854f6       kube-controller-manager-ha-487903
	0a3ebfa791420       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      2 minutes ago       Running             kube-apiserver            2                   6d84027fd8d6f       kube-apiserver-ha-487903
	9f74b446d5d8c       ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178     3 minutes ago       Running             kube-vip                  1                   aadb913b7f2aa       kube-vip-ha-487903
	fead33c061a4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      3 minutes ago       Running             kube-scheduler            1                   83240b63d40d6       kube-scheduler-ha-487903
	b7d9fc5b2567d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      3 minutes ago       Exited              kube-controller-manager   1                   a4df466e854f6       kube-controller-manager-ha-487903
	361486fad16d1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      3 minutes ago       Running             etcd                      1                   2ea97d68a5406       etcd-ha-487903
	37548c727f81a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      3 minutes ago       Exited              kube-apiserver            1                   6d84027fd8d6f       kube-apiserver-ha-487903
	
	
	==> coredns [323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51076 - 6967 "HINFO IN 7389388171048239250.1605567939079731882. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415536075s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44386 - 47339 "HINFO IN 5025386377785033151.6368126768169479003. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.417913634s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
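
Both CoreDNS replicas show the same pattern: the kubernetes plugin starts with an unsynced API, then its reflectors time out dialing the Service VIP 10.96.0.1:443 while the apiserver was still coming back after the restart. A throwaway check along the lines below can confirm from inside a pod whether that VIP answers; the address and timeout mirror the log lines above, everything else is illustrative.

    // vipcheck.go - sketch: can we open a TCP connection to the kubernetes
    // Service VIP that CoreDNS's reflectors are timing out against?
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 10.96.0.1:443 is the in-cluster apiserver VIP seen in the log above.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		fmt.Println("VIP unreachable:", err) // matches the "i/o timeout" failures above
    		return
    	}
    	conn.Close()
    	fmt.Println("VIP reachable")
    }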
	
	
	==> describe nodes <==
	Name:               ha-487903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_48_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-487903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1ad91e99cee4f2a89ceda034e4410c0
	  System UUID:                a1ad91e9-9cee-4f2a-89ce-da034e4410c0
	  Boot ID:                    1b20db97-3ea3-483b-aa28-0753781928f2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vl8nf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-5gt2t             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 coredns-66bc5c9577-zjxkb             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-ha-487903                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kindnet-p9nqh                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      21m
	  kube-system                 kube-apiserver-ha-487903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-487903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-fk7mh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-487903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-487903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 21m                  kube-proxy       
	  Normal   Starting                 2m30s                kube-proxy       
	  Normal   NodeAllocatableEnforced  21m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)    kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)    kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)    kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     21m                  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m                  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           21m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   NodeReady                20m                  kubelet          Node ha-487903 status is now: NodeReady
	  Normal   RegisteredNode           20m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   NodeHasSufficientPID     3m6s (x7 over 3m6s)  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m6s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m6s (x8 over 3m6s)  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m6s (x8 over 3m6s)  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m35s                kubelet          Node ha-487903 has been rebooted, boot id: 1b20db97-3ea3-483b-aa28-0753781928f2
	  Normal   RegisteredNode           2m29s                node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           2m27s                node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           111s                 node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           21s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	
	
	Name:               ha-487903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_49_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:49:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    ha-487903-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcc51fc7a2ff40ae988dda36299d6bbc
	  System UUID:                dcc51fc7-a2ff-40ae-988d-da36299d6bbc
	  Boot ID:                    6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xjvfn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-487903-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         20m
	  kube-system                 kindnet-9zx8x                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      20m
	  kube-system                 kube-apiserver-ha-487903-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-487903-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-77wjf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-487903-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-487903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   RegisteredNode           20m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           20m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   NodeNotReady             16m                    node-controller  Node ha-487903-m02 status is now: NodeNotReady
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 14m                    kubelet          Node ha-487903-m02 has been rebooted, boot id: e9c055dc-1db9-46bb-aebb-1872d4771aa9
	  Normal   RegisteredNode           14m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m44s (x7 over 2m44s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m30s                  kubelet          Node ha-487903-m02 has been rebooted, boot id: 6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Normal   RegisteredNode           2m29s                  node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           2m27s                  node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           111s                   node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           21s                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	
	
	Name:               ha-487903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_50_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:50:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-487903-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9ddbb3bf8b54cd48c27cb1452f23fd2
	  System UUID:                e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2
	  Boot ID:                    ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6q5gq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-487903-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         18m
	  kube-system                 kindnet-kslhw                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      18m
	  kube-system                 kube-apiserver-ha-487903-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-487903-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-tkx9r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-487903-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-487903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 18m                  kube-proxy       
	  Normal   Starting                 107s                 kube-proxy       
	  Normal   RegisteredNode           18m                  node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   NodeNotReady             13m                  node-controller  Node ha-487903-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           2m29s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           2m27s                node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node ha-487903-m03 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 117s                 kubelet          Node ha-487903-m03 has been rebooted, boot id: ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Normal   RegisteredNode           111s                 node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           21s                  node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	
	
	Name:               ha-487903-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_51_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:51:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-487903-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ce148a1b98246f6ada06a5a5b14ddce
	  System UUID:                2ce148a1-b982-46f6-ada0-6a5a5b14ddce
	  Boot ID:                    7878c528-f6af-4234-946e-b1c55c0ff956
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-s9k2l       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      17m
	  kube-system                 kube-proxy-zxtk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 90s                kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     17m (x3 over 17m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x3 over 17m)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m (x3 over 17m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeReady                17m                kubelet          Node ha-487903-m04 status is now: NodeReady
	  Normal   RegisteredNode           14m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeNotReady             13m                node-controller  Node ha-487903-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m29s              node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           2m27s              node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           111s               node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   Starting                 94s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 94s                kubelet          Node ha-487903-m04 has been rebooted, boot id: 7878c528-f6af-4234-946e-b1c55c0ff956
	  Normal   NodeHasSufficientMemory  94s (x4 over 94s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s (x4 over 94s)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     94s (x4 over 94s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             94s                kubelet          Node ha-487903-m04 status is now: NodeNotReady
	  Normal   NodeReady                94s (x2 over 94s)  kubelet          Node ha-487903-m04 status is now: NodeReady
	  Normal   RegisteredNode           21s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	
	
	Name:               ha-487903-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T23_08_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 23:08:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:08:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:08:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:08:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:09:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-487903-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4de896a484581875fb6870ed3d42e
	  System UUID:                32c4de89-6a48-4581-875f-b6870ed3d42e
	  Boot ID:                    38df3e3e-0f55-4608-9956-93b000886dcc
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-487903-m05                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         18s
	  kube-system                 kindnet-4pp9j                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      19s
	  kube-system                 kube-apiserver-ha-487903-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-controller-manager-ha-487903-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-proxy-c5cxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-ha-487903-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-vip-ha-487903-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        16s   kube-proxy       
	  Normal  RegisteredNode  19s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
	  Normal  RegisteredNode  17s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
	  Normal  RegisteredNode  16s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
	  Normal  RegisteredNode  16s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
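
All five node descriptions above end with Ready=True once the restarted members rejoin. The same check can be made programmatically with client-go instead of reading `kubectl describe nodes`; the sketch below only inspects the NodeReady condition and assumes a reachable kubeconfig at the default path.

    // nodesready.go - sketch: print each node's Ready condition, the field
    // summarised in the "describe nodes" output above. Kubeconfig path is assumed.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				fmt.Printf("%-18s Ready=%s since %s\n",
    					n.Name, c.Status, c.LastTransitionTime.Format("15:04:05"))
    			}
    		}
    	}
    }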
	
	
	==> dmesg <==
	[Nov19 23:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 23:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000639] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.971469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.112003] kauditd_printk_skb: 93 callbacks suppressed
	[ +23.563071] kauditd_printk_skb: 193 callbacks suppressed
	[  +9.425091] kauditd_printk_skb: 6 callbacks suppressed
	[  +3.746118] kauditd_printk_skb: 281 callbacks suppressed
	[Nov19 23:07] kauditd_printk_skb: 11 callbacks suppressed
	[Nov19 23:08] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e] <==
	{"level":"error","ts":"2025-11-19T23:08:44.091154Z","caller":"etcdserver/server.go:1585","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1585\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1526\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1498\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1450\ngo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*peerMemberPromoteHandler).ServeHTTP\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/peer.go:140\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/ser
ver.go:2092"}
	{"level":"warn","ts":"2025-11-19T23:08:44.091244Z","caller":"etcdhttp/peer.go:152","msg":"failed to promote a member","member-id":"bff327183f50f5b5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-11-19T23:08:44.213555Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":3364,"remote-peer-id":"bff327183f50f5b5","bytes":5584801,"size":"5.6 MB"}
	{"level":"warn","ts":"2025-11-19T23:08:44.485302Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:08:44.496447Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:08:44.566259Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"bff327183f50f5b5","error":"failed to write bff327183f50f5b5 on stream MsgApp v2 (write tcp 192.168.39.15:2380->192.168.39.250:54272: write: broken pipe)"}
	{"level":"warn","ts":"2025-11-19T23:08:44.567562Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.587171Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aadd773bb1fe5a6f switched to configuration voters=(83930990489806575 5236735666982451297 12312128054573816431 13831441865679893941)"}
	{"level":"info","ts":"2025-11-19T23:08:44.587459Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"546e0a293cd37a14","local-member-id":"aadd773bb1fe5a6f","promoted-member-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.587540Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aadd773bb1fe5a6f","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.698614Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.698690Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"warn","ts":"2025-11-19T23:08:44.734058Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"bff327183f50f5b5","error":"failed to write bff327183f50f5b5 on stream Message (write tcp 192.168.39.15:2380->192.168.39.250:54284: write: broken pipe)"}
	{"level":"warn","ts":"2025-11-19T23:08:44.734978Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.776897Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"bff327183f50f5b5","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-19T23:08:44.776951Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.776969Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.779207Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.781055Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"bff327183f50f5b5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-19T23:08:44.781084Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:50.523545Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:08:57.744664Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:08:58.669455Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:09:11.890391Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:09:14.213806Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aadd773bb1fe5a6f","to":"bff327183f50f5b5","bytes":5584801,"size":"5.6 MB","took":"30.187406936s"}
	
	
	==> kernel <==
	 23:09:16 up 3 min,  0 users,  load average: 0.41, 0.28, 0.12
	Linux ha-487903 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95] <==
	I1119 23:08:55.551136       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:08:55.551164       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:08:55.551384       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:08:55.551391       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:09:05.550320       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:09:05.550468       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:09:05.551072       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:09:05.551088       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:09:05.551228       1 main.go:297] Handling node with IPs: map[192.168.39.250:{}]
	I1119 23:09:05.551234       1 main.go:324] Node ha-487903-m05 has CIDR [10.244.4.0/24] 
	I1119 23:09:05.551333       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.168.39.250 Flags: [] Table: 0 Realm: 0} 
	I1119 23:09:05.552492       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:09:05.552507       1 main.go:301] handling current node
	I1119 23:09:05.552583       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:09:05.552588       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:09:15.550480       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:09:15.550540       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:09:15.551055       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:09:15.551067       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:09:15.551996       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:09:15.552036       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:09:15.552376       1 main.go:297] Handling node with IPs: map[192.168.39.250:{}]
	I1119 23:09:15.552388       1 main.go:324] Node ha-487903-m05 has CIDR [10.244.4.0/24] 
	I1119 23:09:15.552999       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:09:15.553029       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af] <==
	I1119 23:06:41.578069       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:06:41.578813       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:41.578895       1 policy_source.go:240] refreshing policies
	I1119 23:06:41.608674       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:06:41.652233       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 23:06:41.655394       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 23:06:41.655821       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 23:06:41.655850       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 23:06:41.656332       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 23:06:41.656371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 23:06:41.656393       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 23:06:41.661422       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:06:41.661498       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:06:41.661574       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:06:41.678861       1 cache.go:39] Caches are synced for autoregister controller
	W1119 23:06:41.766283       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I1119 23:06:41.770787       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:06:41.846071       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1119 23:06:41.851314       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1119 23:06:42.378977       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:06:42.473024       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1119 23:06:45.193304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.191]
	I1119 23:06:47.599355       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:06:47.956548       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:06:50.470006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4] <==
	I1119 23:06:11.880236       1 server.go:150] Version: v1.34.1
	I1119 23:06:11.880286       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1119 23:06:12.813039       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1119 23:06:12.813073       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1119 23:06:12.813086       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1119 23:06:12.813090       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1119 23:06:12.813094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1119 23:06:12.813097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1119 23:06:12.813101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1119 23:06:12.813104       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1119 23:06:12.813108       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1119 23:06:12.813111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1119 23:06:12.813114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1119 23:06:12.813118       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1119 23:06:12.905211       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:12.913843       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1119 23:06:12.920093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1119 23:06:12.966564       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:12.985714       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1119 23:06:12.985841       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1119 23:06:12.986449       1 instance.go:239] Using reconciler: lease
	W1119 23:06:12.991441       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:32.899983       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1119 23:06:32.912361       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1119 23:06:32.990473       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6] <==
	I1119 23:06:47.649012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 23:06:47.649925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:06:47.650061       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:06:47.652900       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 23:06:47.653973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:06:47.654043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:06:47.654066       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:06:47.655286       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 23:06:47.658251       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 23:06:47.661337       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:06:47.661495       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:06:47.665057       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 23:06:47.668198       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 23:06:47.718631       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m04"
	I1119 23:06:47.722547       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903"
	I1119 23:06:47.722625       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m02"
	I1119 23:06:47.722698       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m03"
	I1119 23:06:47.725022       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 23:07:42.933678       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	E1119 23:08:56.919856       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-6g7fs failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-6g7fs\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1119 23:08:57.480111       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	I1119 23:08:57.481547       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-487903-m05\" does not exist"
	I1119 23:08:57.506476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-487903-m05" podCIDRs=["10.244.4.0/24"]
	I1119 23:08:57.771217       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m05"
	I1119 23:09:14.090514       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	
	
	==> kube-controller-manager [b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b] <==
	I1119 23:06:13.347369       1 serving.go:386] Generated self-signed cert in-memory
	I1119 23:06:14.236064       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1119 23:06:14.236118       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:14.241243       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 23:06:14.241453       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1119 23:06:14.242515       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1119 23:06:14.242958       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 23:06:41.727088       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the reques
t from succeeding"
	
	
	==> kube-proxy [8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b] <==
	I1119 23:06:45.377032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:06:45.478419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:06:45.478668       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.15"]
	E1119 23:06:45.478924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:06:45.554663       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1119 23:06:45.554766       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1119 23:06:45.554814       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:06:45.584249       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:06:45.586108       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:06:45.586390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:45.595385       1 config.go:200] "Starting service config controller"
	I1119 23:06:45.595503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:06:45.595536       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:06:45.595628       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:06:45.595660       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:06:45.595795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:06:45.601653       1 config.go:309] "Starting node config controller"
	I1119 23:06:45.601683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:06:45.601692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:06:45.697008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:06:45.701060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:06:45.701074       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fead33c061a4deb0b1eb4ee9dd3e9e724dade2871a97a7aad79bef05acbd4a07] <==
	I1119 23:06:14.220668       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:06:24.867573       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.15:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1119 23:06:24.867603       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:06:24.867609       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:06:41.527454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:06:41.527518       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:41.550229       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.550314       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.551802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:06:41.551954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:06:41.651239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1119 23:08:57.671401       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xv686\": pod kube-proxy-xv686 is already assigned to node \"ha-487903-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xv686" node="ha-487903-m05"
	E1119 23:08:57.671692       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xv686\": pod kube-proxy-xv686 is already assigned to node \"ha-487903-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-xv686"
	E1119 23:08:57.705927       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nxv8w\": pod kube-proxy-nxv8w is already assigned to node \"ha-487903-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nxv8w" node="ha-487903-m05"
	E1119 23:08:57.706059       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nxv8w\": pod kube-proxy-nxv8w is already assigned to node \"ha-487903-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-nxv8w"
	E1119 23:08:57.705956       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sj6n2\": pod kindnet-sj6n2 is already assigned to node \"ha-487903-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-sj6n2" node="ha-487903-m05"
	E1119 23:08:57.706700       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod df6b69ba-9090-4aed-aae7-68b8b959288d(kube-system/kindnet-sj6n2) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-sj6n2"
	E1119 23:08:57.707817       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sj6n2\": pod kindnet-sj6n2 is already assigned to node \"ha-487903-m05\"" logger="UnhandledError" pod="kube-system/kindnet-sj6n2"
	I1119 23:08:57.707882       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sj6n2" node="ha-487903-m05"
	
	
	==> kubelet <==
	Nov 19 23:07:16 ha-487903 kubelet[1174]: I1119 23:07:16.059285    1174 scope.go:117] "RemoveContainer" containerID="f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17"
	Nov 19 23:07:20 ha-487903 kubelet[1174]: E1119 23:07:20.435116    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593640433962521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:20 ha-487903 kubelet[1174]: E1119 23:07:20.435143    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593640433962521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443024    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443098    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446103    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446443    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450547    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450679    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:00 ha-487903 kubelet[1174]: E1119 23:08:00.452642    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593680452197882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:00 ha-487903 kubelet[1174]: E1119 23:08:00.452706    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593680452197882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:10 ha-487903 kubelet[1174]: E1119 23:08:10.456047    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593690454418478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:10 ha-487903 kubelet[1174]: E1119 23:08:10.456177    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593690454418478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:20 ha-487903 kubelet[1174]: E1119 23:08:20.458413    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593700457879325  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:20 ha-487903 kubelet[1174]: E1119 23:08:20.458447    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593700457879325  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:30 ha-487903 kubelet[1174]: E1119 23:08:30.460710    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593710460384024  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:30 ha-487903 kubelet[1174]: E1119 23:08:30.460817    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593710460384024  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:40 ha-487903 kubelet[1174]: E1119 23:08:40.469974    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593720465271267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:40 ha-487903 kubelet[1174]: E1119 23:08:40.470287    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593720465271267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:50 ha-487903 kubelet[1174]: E1119 23:08:50.471932    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593730471534019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:50 ha-487903 kubelet[1174]: E1119 23:08:50.471982    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593730471534019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:00 ha-487903 kubelet[1174]: E1119 23:09:00.477445    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593740474843600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:00 ha-487903 kubelet[1174]: E1119 23:09:00.478339    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593740474843600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:10 ha-487903 kubelet[1174]: E1119 23:09:10.481303    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593750480821927  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:10 ha-487903 kubelet[1174]: E1119 23:09:10.481376    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593750480821927  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-487903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (81.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-487903" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-487903\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-487903\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\
":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-487903\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.15\",\"Port\":8443,\"KubernetesVersion\":\"v
1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.191\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.160\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.187\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.39.250\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\
":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":9460800000000
0000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-487903 -n ha-487903
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 logs -n 25: (1.830265567s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test.txt                                                                   │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt                                       │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt                                                           │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt                     │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt                                               │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt             │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m02 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt                                       │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ cp      │ ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt             │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m04 sudo cat /home/docker/cp-test.txt                                                                   │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ ssh     │ ha-487903 ssh -n ha-487903-m03 sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt                                       │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:52 UTC │
	│ node    │ ha-487903 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:52 UTC │ 19 Nov 25 22:53 UTC │
	│ node    │ ha-487903 node start m02 --alsologtostderr -v 5                                                                                     │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:53 UTC │ 19 Nov 25 22:54 UTC │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5                                                                                               │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:54 UTC │ 19 Nov 25 22:58 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5                                                                                  │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 22:58 UTC │                     │
	│ node    │ ha-487903 node list --alsologtostderr -v 5                                                                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ node    │ ha-487903 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │                     │
	│ stop    │ ha-487903 stop --alsologtostderr -v 5                                                                                               │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:05 UTC │
	│ start   │ ha-487903 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio                                          │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:05 UTC │ 19 Nov 25 23:07 UTC │
	│ node    │ ha-487903 node add --control-plane --alsologtostderr -v 5                                                                           │ ha-487903 │ jenkins │ v1.37.0 │ 19 Nov 25 23:07 UTC │ 19 Nov 25 23:09 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:05:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:05:52.706176  142733 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:05:52.706327  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706339  142733 out.go:374] Setting ErrFile to fd 2...
	I1119 23:05:52.706345  142733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:05:52.706585  142733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:05:52.707065  142733 out.go:368] Setting JSON to false
	I1119 23:05:52.708054  142733 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":17300,"bootTime":1763576253,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 23:05:52.708149  142733 start.go:143] virtualization: kvm guest
	I1119 23:05:52.710481  142733 out.go:179] * [ha-487903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 23:05:52.712209  142733 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:05:52.712212  142733 notify.go:221] Checking for updates...
	I1119 23:05:52.713784  142733 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:05:52.715651  142733 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:05:52.717169  142733 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 23:05:52.718570  142733 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 23:05:52.719907  142733 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:05:52.721783  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:05:52.722291  142733 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:05:52.757619  142733 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 23:05:52.759046  142733 start.go:309] selected driver: kvm2
	I1119 23:05:52.759059  142733 start.go:930] validating driver "kvm2" against &{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:fal
se default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.759205  142733 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:05:52.760143  142733 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:05:52.760174  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:05:52.760222  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:05:52.760262  142733 start.go:353] cluster config:
	{Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:05:52.760375  142733 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:05:52.762211  142733 out.go:179] * Starting "ha-487903" primary control-plane node in "ha-487903" cluster
	I1119 23:05:52.763538  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:05:52.763567  142733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 23:05:52.763575  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:05:52.763673  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:05:52.763683  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:05:52.763787  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:05:52.763996  142733 start.go:360] acquireMachinesLock for ha-487903: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:05:52.764045  142733 start.go:364] duration metric: took 30.713µs to acquireMachinesLock for "ha-487903"
	I1119 23:05:52.764058  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:05:52.764066  142733 fix.go:54] fixHost starting: 
	I1119 23:05:52.765697  142733 fix.go:112] recreateIfNeeded on ha-487903: state=Stopped err=<nil>
	W1119 23:05:52.765728  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:05:52.767327  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903" ...
	I1119 23:05:52.767364  142733 main.go:143] libmachine: starting domain...
	I1119 23:05:52.767374  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:05:52.768372  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:05:52.768788  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:05:52.769282  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:05:52.770421  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903</name>
	  <uuid>a1ad91e9-9cee-4f2a-89ce-da034e4410c0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/ha-487903.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:81:53'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:93:d5:3e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
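The XML above is the existing libvirt definition that minikube reuses when restarting the primary control-plane VM: boot from the boot2docker ISO, a raw disk, and two virtio NICs on the mk-ha-487903 and default networks. As a rough illustration of the same restart outside minikube, here is a minimal Go sketch assuming the libvirt.org/go/libvirt bindings (the connection URI, network names, and domain name come from this log; everything else is illustrative):

// restart_domain.go - sketch of restarting an already-defined libvirt domain,
// assuming the libvirt.org/go/libvirt bindings; names taken from the log above.
package main

import (
    "log"

    "libvirt.org/go/libvirt"
)

func main() {
    conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the cluster config
    if err != nil {
        log.Fatalf("connect: %v", err)
    }
    defer conn.Close()

    // Ensure both networks referenced by the domain XML are active.
    for _, name := range []string{"default", "mk-ha-487903"} {
        nw, err := conn.LookupNetworkByName(name)
        if err != nil {
            log.Fatalf("lookup network %s: %v", name, err)
        }
        if active, _ := nw.IsActive(); !active {
            if err := nw.Create(); err != nil { // start an inactive network
                log.Fatalf("start network %s: %v", name, err)
            }
        }
        nw.Free()
    }

    // Start the already-defined domain (equivalent to `virsh start ha-487903`).
    dom, err := conn.LookupDomainByName("ha-487903")
    if err != nil {
        log.Fatalf("lookup domain: %v", err)
    }
    defer dom.Free()
    if err := dom.Create(); err != nil {
        log.Fatalf("start domain: %v", err)
    }
    log.Println("domain ha-487903 started")
}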
	
	I1119 23:05:54.042651  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:05:54.044244  142733 main.go:143] libmachine: domain is now running
	I1119 23:05:54.044267  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:05:54.045198  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045704  142733 main.go:143] libmachine: domain ha-487903 has current primary IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.045724  142733 main.go:143] libmachine: found domain IP: 192.168.39.15
	I1119 23:05:54.045732  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:05:54.046222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.046258  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903", mac: "52:54:00:a9:81:53", ip: "192.168.39.15"}
	I1119 23:05:54.046271  142733 main.go:143] libmachine: reserved static IP address 192.168.39.15 for domain ha-487903
	I1119 23:05:54.046295  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:05:54.046303  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:05:54.048860  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049341  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:08 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:05:54.049374  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:05:54.049568  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:05:54.049870  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:05:54.049901  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:05:57.100181  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:03.180312  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.15:22: connect: no route to host
	I1119 23:06:06.296535  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
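The two "no route to host" errors above are the normal retry path while the guest is still bringing up its network; the loop keeps dialing port 22 until the `exit 0` probe succeeds. A minimal standard-library sketch of that wait loop (address from the log, timeouts illustrative):

// wait_ssh.go - poll a TCP port until it accepts connections, standard library only.
package main

import (
    "fmt"
    "net"
    "time"
)

// waitForTCP dials addr repeatedly until it connects or the deadline passes.
func waitForTCP(addr string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err == nil {
            conn.Close()
            return nil
        }
        fmt.Printf("Error dialing TCP: %v (retrying)\n", err)
        time.Sleep(3 * time.Second)
    }
    return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
    if err := waitForTCP("192.168.39.15:22", 2*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("SSH port is reachable")
}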
	I1119 23:06:06.299953  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300441  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.300473  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.300784  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:06.301022  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:06.303559  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.303988  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.304019  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.304170  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.304355  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.304365  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:06.427246  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:06.427299  142733 buildroot.go:166] provisioning hostname "ha-487903"
	I1119 23:06:06.430382  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.430835  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.430864  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.431166  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.431461  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.431480  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903 && echo "ha-487903" | sudo tee /etc/hostname
	I1119 23:06:06.561698  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903
	
	I1119 23:06:06.564714  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565207  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.565235  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.565469  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:06.565702  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:06.565719  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:06.681480  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
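Each provisioning step above ("About to run SSH command") is a one-shot command executed as the docker user with the machine's id_rsa key. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh (key path, user, and address come from the sshutil lines in this log; skipping host-key verification is only acceptable because the VM is a local, throwaway test machine):

// ssh_cmd.go - run a single command on the VM over SSH, assuming golang.org/x/crypto/ssh.
package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    key, err := os.ReadFile("/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa")
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        panic(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway test VM only
    }
    client, err := ssh.Dial("tcp", "192.168.39.15:22", cfg)
    if err != nil {
        panic(err)
    }
    defer client.Close()

    session, err := client.NewSession()
    if err != nil {
        panic(err)
    }
    defer session.Close()

    out, err := session.CombinedOutput("hostname")
    if err != nil {
        panic(err)
    }
    fmt.Printf("hostname: %s", out)
}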
	I1119 23:06:06.681508  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:06.681543  142733 buildroot.go:174] setting up certificates
	I1119 23:06:06.681552  142733 provision.go:84] configureAuth start
	I1119 23:06:06.685338  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.685816  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.685842  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.688699  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:06.689164  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:06.689319  142733 provision.go:143] copyHostCerts
	I1119 23:06:06.689357  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689414  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:06.689445  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:06.689527  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:06.689624  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689643  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:06.689649  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:06.689677  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:06.689736  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689753  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:06.689759  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:06.689781  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:06.689843  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903 san=[127.0.0.1 192.168.39.15 ha-487903 localhost minikube]
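configureAuth regenerates the Docker-machine style server certificate, signed by the machine CA, with the organization and SAN list shown in the line above. This is not minikube's code, but a hedged sketch of how such a certificate can be issued with the standard library (paths, org, SANs, and the 26280h expiry are taken from this log; the PKCS#1 assumption about the CA key format is mine):

// server_cert.go - sketch of issuing a server cert with the SANs from the log, signed by the machine CA.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func mustPEM(path string) *pem.Block {
    raw, err := os.ReadFile(path)
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        panic("no PEM data in " + path)
    }
    return block
}

func main() {
    base := "/home/jenkins/minikube-integration/21918-117497/.minikube/certs/"
    caCert, err := x509.ParseCertificate(mustPEM(base + "ca.pem").Bytes)
    if err != nil {
        panic(err)
    }
    caKey, err := x509.ParsePKCS1PrivateKey(mustPEM(base + "ca-key.pem").Bytes)
    if err != nil {
        panic(err) // assumes an RSA PKCS#1 CA key; adjust the parser if the key differs
    }

    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-487903"}}, // org from the log
        NotBefore:    time.Now().Add(-time.Hour),
        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-487903", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.15")},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}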
	I1119 23:06:07.018507  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:07.018578  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:07.021615  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022141  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.022166  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.022358  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.124817  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:07.124927  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:07.158158  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:07.158263  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1119 23:06:07.190088  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:07.190169  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:07.222689  142733 provision.go:87] duration metric: took 541.123395ms to configureAuth
	I1119 23:06:07.222718  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:07.222970  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:07.226056  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226580  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.226611  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.226826  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.227127  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.227155  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:07.467444  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:07.467474  142733 machine.go:97] duration metric: took 1.166437022s to provisionDockerMachine
	I1119 23:06:07.467487  142733 start.go:293] postStartSetup for "ha-487903" (driver="kvm2")
	I1119 23:06:07.467497  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:07.467573  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:07.470835  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471406  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.471439  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.471649  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.557470  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:07.562862  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:07.562927  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:07.563034  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:07.563138  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:07.563154  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:07.563287  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:07.576076  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:07.609515  142733 start.go:296] duration metric: took 142.008328ms for postStartSetup
	I1119 23:06:07.609630  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:07.612430  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.612824  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.612846  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.613026  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.696390  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:07.696457  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:07.760325  142733 fix.go:56] duration metric: took 14.99624586s for fixHost
	I1119 23:06:07.763696  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764319  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.764358  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.764614  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:07.764948  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.15 22 <nil> <nil>}
	I1119 23:06:07.764966  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:07.879861  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593567.838594342
	
	I1119 23:06:07.879914  142733 fix.go:216] guest clock: 1763593567.838594342
	I1119 23:06:07.879939  142733 fix.go:229] Guest: 2025-11-19 23:06:07.838594342 +0000 UTC Remote: 2025-11-19 23:06:07.760362222 +0000 UTC m=+15.104606371 (delta=78.23212ms)
	I1119 23:06:07.879965  142733 fix.go:200] guest clock delta is within tolerance: 78.23212ms
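The reported 78.23212ms delta is simply the difference between the guest's `date +%s.%N` sample and the host-side reading printed on the same line. A small standard-library sketch reproducing that arithmetic (both values copied from the log):

// clock_delta.go - recompute the guest/host clock delta from two `date +%s.%N` style samples.
package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// parseClock converts "seconds.nanoseconds" into a time.Time.
func parseClock(s string) (time.Time, error) {
    parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    sec, err := strconv.ParseInt(parts[0], 10, 64)
    if err != nil {
        return time.Time{}, err
    }
    var nsec int64
    if len(parts) == 2 {
        frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
        if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
            return time.Time{}, err
        }
    }
    return time.Unix(sec, nsec), nil
}

func main() {
    guest, err := parseClock("1763593567.838594342") // guest sample from the log
    if err != nil {
        panic(err)
    }
    host, err := parseClock("1763593567.760362222") // host-side reading from the same log line
    if err != nil {
        panic(err)
    }
    fmt.Printf("delta: %v\n", guest.Sub(host)) // prints 78.23212ms, matching the log
}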
	I1119 23:06:07.879974  142733 start.go:83] releasing machines lock for "ha-487903", held for 15.115918319s
	I1119 23:06:07.882904  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883336  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.883370  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.883966  142733 ssh_runner.go:195] Run: cat /version.json
	I1119 23:06:07.884051  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:07.887096  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887222  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887583  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887617  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887792  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:07.887817  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:07.887816  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:07.888042  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:08.000713  142733 ssh_runner.go:195] Run: systemctl --version
	I1119 23:06:08.008530  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:08.160324  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:08.168067  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:08.168152  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:08.191266  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:08.191300  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:08.191379  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:08.213137  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:08.230996  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:08.231095  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:08.249013  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:08.265981  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:08.414758  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:08.622121  142733 docker.go:234] disabling docker service ...
	I1119 23:06:08.622209  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:08.639636  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:08.655102  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:08.816483  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:08.968104  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:08.984576  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:09.008691  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:09.008781  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.022146  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:09.022232  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.035596  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.049670  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.063126  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:09.077541  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.091115  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:09.112968  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
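The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. For illustration only, the same kind of in-place rewrite in Go, shown for the pause_image line (the regex mirrors the sed expression above):

// crio_conf.go - rewrite the pause_image line of a cri-o drop-in, mirroring the sed call above.
package main

import (
    "os"
    "regexp"
)

func main() {
    const path = "/etc/crio/crio.conf.d/02-crio.conf"
    data, err := os.ReadFile(path)
    if err != nil {
        panic(err)
    }
    re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    if err := os.WriteFile(path, out, 0o644); err != nil {
        panic(err)
    }
}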
	I1119 23:06:09.126168  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:09.137702  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:09.137765  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:09.176751  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 23:06:09.191238  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:09.335526  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:09.473011  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:09.473116  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:06:09.479113  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:09.479189  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:09.483647  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:09.528056  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:06:09.528131  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.559995  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:09.592672  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:09.597124  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597564  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:09.597590  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:09.597778  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:09.602913  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:09.620048  142733 kubeadm.go:884] updating cluster {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stor
ageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:06:09.620196  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:09.620243  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.674254  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.674279  142733 crio.go:433] Images already preloaded, skipping extraction
	I1119 23:06:09.674328  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:09.712016  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:09.712041  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:09.712058  142733 kubeadm.go:935] updating node { 192.168.39.15 8443 v1.34.1 crio true true} ...
	I1119 23:06:09.712184  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:06:09.712274  142733 ssh_runner.go:195] Run: crio config
	I1119 23:06:09.768708  142733 cni.go:84] Creating CNI manager for ""
	I1119 23:06:09.768732  142733 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1119 23:06:09.768752  142733 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:06:09.768773  142733 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.15 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-487903 NodeName:ha-487903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:06:09.768939  142733 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-487903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
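The rendered kubeadm config is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check which documents ended up in the file that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below, sketched here assuming gopkg.in/yaml.v3:

// kubeadm_docs.go - list the documents in the rendered kubeadm config, assuming gopkg.in/yaml.v3.
package main

import (
    "errors"
    "fmt"
    "io"
    "os"

    "gopkg.in/yaml.v3"
)

func main() {
    f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path used by the scp step below
    if err != nil {
        panic(err)
    }
    defer f.Close()

    dec := yaml.NewDecoder(f)
    for {
        var doc map[string]interface{}
        if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
            break
        } else if err != nil {
            panic(err)
        }
        fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
    }
}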
	
	I1119 23:06:09.768965  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:09.769018  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:09.795571  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:09.795712  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
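kube-vip runs as a static pod (its manifest is written to /etc/kubernetes/manifests/kube-vip.yaml just below) and advertises the HA virtual IP 192.168.39.254 with control-plane load balancing on port 8443. A standard-library sketch of a quick reachability probe against that VIP (InsecureSkipVerify only because the goal is to see that something terminates TLS there, not to verify identity):

// probe_vip.go - check that the kube-vip virtual IP answers TLS on the API port.
package main

import (
    "crypto/tls"
    "fmt"
    "net"
    "time"
)

func main() {
    dialer := &net.Dialer{Timeout: 5 * time.Second}
    conn, err := tls.DialWithDialer(dialer, "tcp", "192.168.39.254:8443", &tls.Config{
        InsecureSkipVerify: true, // we only care that something terminates TLS on the VIP
    })
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    state := conn.ConnectionState()
    if len(state.PeerCertificates) > 0 {
        fmt.Printf("VIP is up, serving cert for: %v\n", state.PeerCertificates[0].Subject)
    }
}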
	I1119 23:06:09.795795  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:09.812915  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:09.812990  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1119 23:06:09.827102  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1119 23:06:09.850609  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:09.873695  142733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 23:06:09.898415  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:09.921905  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:09.927238  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:09.944650  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:10.092858  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:10.131346  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.15
	I1119 23:06:10.131374  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:10.131396  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.131585  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:10.131628  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:10.131638  142733 certs.go:257] generating profile certs ...
	I1119 23:06:10.131709  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:10.131766  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e6b2c30
	I1119 23:06:10.131799  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:10.131811  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:10.131823  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:10.131835  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:10.131844  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:10.131857  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:10.131867  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:10.131905  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:10.131923  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:10.131976  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:10.132017  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:10.132030  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:10.132063  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:10.132120  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:10.132148  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:10.132194  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:10.132221  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.132233  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.132244  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.132912  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:10.173830  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:10.215892  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:10.259103  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:10.294759  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:10.334934  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:10.388220  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:10.446365  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:10.481746  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:10.514956  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:10.547594  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:10.595613  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:06:10.619484  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:10.626921  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:10.641703  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647634  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.647703  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:10.655724  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:10.670575  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:10.684630  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690618  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.690694  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:10.698531  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:06:10.713731  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:10.729275  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735204  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.735297  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:10.744718  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:06:10.760092  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:10.765798  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:10.773791  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:10.781675  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:10.789835  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:10.797921  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:10.806330  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
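The "openssl x509 -noout -checkend 86400" runs above ask whether each control-plane certificate is still valid 24 hours from now. As a rough illustration (not minikube's code), the same check can be done in Go with only the standard library; the helper name and printed output are ours:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certValidFor reports whether the PEM certificate at path is still
    // valid for at least d, mirroring "openssl x509 -checkend".
    func certValidFor(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certValidFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        fmt.Println("valid for 24h:", ok, err)
    }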
	I1119 23:06:10.814663  142733 kubeadm.go:401] StartCluster: {Name:ha-487903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:06:10.814784  142733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:06:10.814836  142733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:06:10.862721  142733 cri.go:89] found id: ""
	I1119 23:06:10.862820  142733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:06:10.906379  142733 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:06:10.906398  142733 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:06:10.906444  142733 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:06:10.937932  142733 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:06:10.938371  142733 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-487903" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.938511  142733 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "ha-487903" cluster setting kubeconfig missing "ha-487903" context setting]
	I1119 23:06:10.938761  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.939284  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:06:10.939703  142733 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 23:06:10.939720  142733 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 23:06:10.939727  142733 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 23:06:10.939732  142733 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 23:06:10.939737  142733 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 23:06:10.939800  142733 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1119 23:06:10.940217  142733 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:06:10.970469  142733 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.15
	I1119 23:06:10.970501  142733 kubeadm.go:602] duration metric: took 64.095819ms to restartPrimaryControlPlane
	I1119 23:06:10.970515  142733 kubeadm.go:403] duration metric: took 155.861263ms to StartCluster
	I1119 23:06:10.970538  142733 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.970645  142733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:06:10.971536  142733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:10.971861  142733 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.15 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:10.971912  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:10.971934  142733 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:06:10.972157  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.972266  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:10.972332  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:10.972347  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 95.206µs
	I1119 23:06:10.972358  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:10.972373  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:10.972588  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:10.974762  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:10.975000  142733 out.go:179] * Enabled addons: 
	I1119 23:06:10.976397  142733 addons.go:515] duration metric: took 4.466316ms for enable addons: enabled=[]
	I1119 23:06:10.977405  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.977866  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:10.977902  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:10.978075  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
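Each "new ssh client" line builds a client from the node's IP, port 22 and the per-machine id_rsa key, and the subsequent Run: lines execute commands over that connection. A rough sketch of the pattern with golang.org/x/crypto/ssh (an illustration of the approach, not minikube's sshutil implementation; runRemote is our name):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runRemote opens an SSH session as the given user and returns the
    // combined output of one command, similar to what ssh_runner's Run does.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("192.168.39.15:22", "docker",
            "/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa", "hostname")
        fmt.Println(out, err)
    }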
	I1119 23:06:11.174757  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:11.174779  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:11.179357  142733 cache_images.go:264] succeeded pushing to: ha-487903
	I1119 23:06:11.179394  142733 start.go:247] waiting for cluster config update ...
	I1119 23:06:11.179405  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:11.181383  142733 out.go:203] 
	I1119 23:06:11.182846  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:11.182976  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.184565  142733 out.go:179] * Starting "ha-487903-m02" control-plane node in "ha-487903" cluster
	I1119 23:06:11.185697  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:11.185715  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:11.185830  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:11.185845  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:11.185991  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:11.186234  142733 start.go:360] acquireMachinesLock for ha-487903-m02: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:11.186285  142733 start.go:364] duration metric: took 28.134µs to acquireMachinesLock for "ha-487903-m02"
	I1119 23:06:11.186301  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:11.186314  142733 fix.go:54] fixHost starting: m02
	I1119 23:06:11.187948  142733 fix.go:112] recreateIfNeeded on ha-487903-m02: state=Stopped err=<nil>
	W1119 23:06:11.187969  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:11.189608  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m02" ...
	I1119 23:06:11.189647  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:11.189655  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:11.190534  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:11.190964  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:11.191485  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:11.192659  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m02</name>
	  <uuid>dcc51fc7-a2ff-40ae-988d-da36299d6bbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/ha-487903-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:04:d5:70'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9b:1d:f0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 23:06:12.559560  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:12.561198  142733 main.go:143] libmachine: domain is now running
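Restarting the existing m02 machine amounts to asking libvirt to start the already-defined domain dumped above and waiting for it to come up. The kvm2 driver does this through the libvirt API directly; a rough equivalent, sketched here by shelling out to virsh against the same qemu:///system URI (illustrative only, not the driver's code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // startDomain starts an existing, stopped libvirt domain via the virsh
    // CLI and prints its state afterwards.
    func startDomain(name string) error {
        if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh start %s: %v: %s", name, err, out)
        }
        out, err := exec.Command("virsh", "--connect", "qemu:///system", "domstate", name).CombinedOutput()
        if err != nil {
            return err
        }
        fmt.Printf("%s: %s", name, out)
        return nil
    }

    func main() {
        if err := startDomain("ha-487903-m02"); err != nil {
            fmt.Println(err)
        }
    }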
	I1119 23:06:12.561220  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:12.562111  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562699  142733 main.go:143] libmachine: domain ha-487903-m02 has current primary IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.562715  142733 main.go:143] libmachine: found domain IP: 192.168.39.191
	I1119 23:06:12.562721  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:12.563203  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.563229  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m02", mac: "52:54:00:04:d5:70", ip: "192.168.39.191"}
	I1119 23:06:12.563240  142733 main.go:143] libmachine: reserved static IP address 192.168.39.191 for domain ha-487903-m02
	I1119 23:06:12.563244  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:12.563250  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:12.566254  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.566903  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:59:32 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:12.566943  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:12.567198  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:12.567490  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:12.567510  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:15.660251  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:21.740210  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: no route to host
	I1119 23:06:24.742545  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.191:22: connect: connection refused
	I1119 23:06:27.848690  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
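The "no route to host" and "connection refused" errors above are expected while the guest is still booting; the driver simply retries until port 22 answers, which is what the empty SSH result marks. A small sketch of such a wait loop (timeout and polling interval chosen arbitrarily here):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls a host's SSH port until it accepts TCP connections
    // or the overall deadline expires.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("ssh on %s not reachable after %s: %v", addr, timeout, err)
            }
            time.Sleep(3 * time.Second) // e.g. "no route to host" / "connection refused" while booting
        }
    }

    func main() {
        fmt.Println(waitForSSH("192.168.39.191:22", 2*time.Minute))
    }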
	I1119 23:06:27.852119  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852581  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.852609  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.852840  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:27.853068  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:06:27.855169  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855519  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.855541  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.855673  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.855857  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.855866  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:06:27.961777  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:06:27.961813  142733 buildroot.go:166] provisioning hostname "ha-487903-m02"
	I1119 23:06:27.964686  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965144  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:27.965168  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:27.965332  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:27.965514  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:27.965525  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m02 && echo "ha-487903-m02" | sudo tee /etc/hostname
	I1119 23:06:28.090321  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m02
	
	I1119 23:06:28.093353  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093734  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.093771  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.093968  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.094236  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.094259  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:06:28.210348  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:06:28.210378  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:06:28.210394  142733 buildroot.go:174] setting up certificates
	I1119 23:06:28.210406  142733 provision.go:84] configureAuth start
	I1119 23:06:28.213280  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.213787  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.213819  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216188  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216513  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.216537  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.216650  142733 provision.go:143] copyHostCerts
	I1119 23:06:28.216681  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216719  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:06:28.216731  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:06:28.216806  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:06:28.216924  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.216954  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:06:28.216962  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:06:28.217011  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:06:28.217078  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217105  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:06:28.217114  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:06:28.217151  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:06:28.217219  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m02 san=[127.0.0.1 192.168.39.191 ha-487903-m02 localhost minikube]
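The server certificate generated here is signed by the shared machine CA and carries the SANs listed above (127.0.0.1, 192.168.39.191, ha-487903-m02, localhost, minikube). A condensed sketch of issuing such a certificate with crypto/x509, assuming the CA cert and key have already been parsed; issueServerCert is our name, not minikube's:

    package certs

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with an
    // existing CA, roughly what the provisioner writes out as server.pem.
    func issueServerCert(caCert *x509.Certificate, caKey crypto.Signer,
        dnsNames []string, ips []net.IP) (certPEM []byte, key *rsa.PrivateKey, err error) {

        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-487903-m02"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        return certPEM, key, nil
    }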
	I1119 23:06:28.306411  142733 provision.go:177] copyRemoteCerts
	I1119 23:06:28.306488  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:06:28.309423  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309811  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.309838  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.309994  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.397995  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:06:28.398093  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:06:28.433333  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:06:28.433422  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:06:28.465202  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:06:28.465281  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:06:28.497619  142733 provision.go:87] duration metric: took 287.196846ms to configureAuth
	I1119 23:06:28.497657  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:06:28.497961  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:28.500692  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501143  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.501166  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.501348  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:28.501530  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:28.501542  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:06:28.756160  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:06:28.756188  142733 machine.go:97] duration metric: took 903.106737ms to provisionDockerMachine
	I1119 23:06:28.756199  142733 start.go:293] postStartSetup for "ha-487903-m02" (driver="kvm2")
	I1119 23:06:28.756221  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:06:28.756309  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:06:28.759030  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759384  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.759410  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.759547  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.845331  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:06:28.850863  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:06:28.850908  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:06:28.850968  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:06:28.851044  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:06:28.851055  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:06:28.851135  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:06:28.863679  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:28.895369  142733 start.go:296] duration metric: took 139.152116ms for postStartSetup
	I1119 23:06:28.895468  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:06:28.898332  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898765  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:28.898790  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:28.898999  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:28.985599  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:06:28.985693  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:06:29.047204  142733 fix.go:56] duration metric: took 17.860883759s for fixHost
	I1119 23:06:29.050226  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050744  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.050767  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.050981  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:29.051235  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.191 22 <nil> <nil>}
	I1119 23:06:29.051247  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:06:29.170064  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593589.134247097
	
	I1119 23:06:29.170097  142733 fix.go:216] guest clock: 1763593589.134247097
	I1119 23:06:29.170109  142733 fix.go:229] Guest: 2025-11-19 23:06:29.134247097 +0000 UTC Remote: 2025-11-19 23:06:29.047235815 +0000 UTC m=+36.391479959 (delta=87.011282ms)
	I1119 23:06:29.170136  142733 fix.go:200] guest clock delta is within tolerance: 87.011282ms
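The clock check reads "date +%s.%N" on the guest and compares it with the host's wall clock; the ~87 ms delta here is under the tolerance, so no time resync is pushed. A small sketch of that comparison (the tolerance constant below is illustrative, not the value minikube uses):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is from the local (host) clock.
    func clockDelta(guestOut string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return time.Since(guest), nil
    }

    func main() {
        d, err := clockDelta("1763593589.134247097\n")
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // illustrative threshold
        fmt.Printf("delta=%v, within %v: %v\n", d, tolerance,
            math.Abs(float64(d)) <= float64(tolerance))
    }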
	I1119 23:06:29.170145  142733 start.go:83] releasing machines lock for "ha-487903-m02", held for 17.983849826s
	I1119 23:06:29.173173  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.173648  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.173674  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.175909  142733 out.go:179] * Found network options:
	I1119 23:06:29.177568  142733 out.go:179]   - NO_PROXY=192.168.39.15
	W1119 23:06:29.178760  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:06:29.179292  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:06:29.179397  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:06:29.179416  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:06:29.182546  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.182562  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183004  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183038  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183140  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:29.183185  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:29.183194  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:29.183426  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:29.429918  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:06:29.437545  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:06:29.437605  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:06:29.459815  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:06:29.459846  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:06:29.459981  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:06:29.484636  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:06:29.506049  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:06:29.506131  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:06:29.529159  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:06:29.547692  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:06:29.709216  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:06:29.933205  142733 docker.go:234] disabling docker service ...
	I1119 23:06:29.933271  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:06:29.951748  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:06:29.967973  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:06:30.147148  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:06:30.300004  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:06:30.316471  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:06:30.341695  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:06:30.341768  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.355246  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:06:30.355313  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.368901  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.381931  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.395421  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:06:30.410190  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.424532  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.447910  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:06:30.462079  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:06:30.473475  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:06:30.473555  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:06:30.495385  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
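Before restarting CRI-O, the runner makes sure bridged traffic is visible to iptables (loading br_netfilter when the bridge sysctl is missing, as the error above shows) and turns on IPv4 forwarding. A sketch of those two steps from Go, mirroring the modprobe and echo commands in the log (must run as root; not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // enableBridgeNetfilter loads br_netfilter if the bridge sysctl is absent,
    // then enables IPv4 forwarding via the standard /proc interface.
    func enableBridgeNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
    }

    func main() {
        if err := enableBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }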
	I1119 23:06:30.507744  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:30.650555  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:06:30.778126  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:06:30.778224  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:06:30.784440  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:06:30.784509  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:06:30.789036  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:06:30.834259  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:06:30.834368  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.866387  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:06:30.901524  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:06:30.902829  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:06:30.906521  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.906929  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:30.906948  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:30.907113  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:06:30.912354  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:06:30.929641  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:06:30.929929  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:30.931609  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:06:30.931865  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.191
	I1119 23:06:30.931896  142733 certs.go:195] generating shared ca certs ...
	I1119 23:06:30.931917  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:06:30.932057  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:06:30.932118  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:06:30.932128  142733 certs.go:257] generating profile certs ...
	I1119 23:06:30.932195  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:06:30.932244  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.4e640f1f
	I1119 23:06:30.932279  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:06:30.932291  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:06:30.932302  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:06:30.932313  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:06:30.932326  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:06:30.932335  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:06:30.932348  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:06:30.932360  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:06:30.932370  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:06:30.932416  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:06:30.932442  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:06:30.932451  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:06:30.932473  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:06:30.932493  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:06:30.932514  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:06:30.932559  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:06:30.932585  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:06:30.932599  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:30.932609  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:06:30.934682  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935112  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:30.935137  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:30.935281  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:31.009328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:06:31.016386  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:06:31.030245  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:06:31.035820  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:06:31.049236  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:06:31.054346  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:06:31.067895  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:06:31.073323  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:06:31.087209  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:06:31.092290  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:06:31.105480  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:06:31.110774  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:06:31.124311  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:06:31.157146  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:06:31.188112  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:06:31.219707  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:06:31.252776  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:06:31.288520  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:06:31.324027  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:06:31.356576  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:06:31.388386  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:06:31.418690  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:06:31.450428  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:06:31.480971  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:06:31.502673  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:06:31.525149  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:06:31.547365  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:06:31.569864  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:06:31.592406  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:06:31.614323  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
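The transfers above stage the cluster CA material, signed certificates, and kubeconfig onto the new control-plane node before kubeadm runs. A minimal sketch of one such transfer over SSH, piping the file into sudo tee via golang.org/x/crypto/ssh; the host address, user, and key path are illustrative assumptions, not minikube's actual ssh_runner implementation.

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyFileOverSSH pushes a local file to a remote path by piping it into
// `sudo tee`, one simple way to get scp-like transfers without an sftp dependency.
func copyFileOverSSH(addr, user, keyPath, local, remote string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remote))
}

func main() {
	// Hypothetical key path and file names, for illustration only.
	err := copyFileOverSSH("192.168.39.15:22", "docker",
		"/home/jenkins/.ssh/id_rsa", "ca.crt", "/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
}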
	I1119 23:06:31.638212  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:06:31.645456  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:06:31.659620  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665114  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.665178  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:06:31.672451  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:06:31.686443  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:06:31.700888  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706357  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.706409  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:06:31.713959  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:06:31.727492  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:06:31.741862  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747549  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.747622  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:06:31.755354  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
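The ls/openssl/ln sequence above installs each CA into the node's system trust store under its OpenSSL subject-hash name (for example /etc/ssl/certs/b5213941.0 for minikubeCA.pem). A small sketch of that convention, shelling out to openssl for the hash; the paths are illustrative assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the trust-store convention used above:
// /etc/ssl/certs/<subject-hash>.0 -> CA certificate, where the hash comes
// from `openssl x509 -hash -noout -in <cert>`.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // drop any stale link before recreating it
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}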
	I1119 23:06:31.769594  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:06:31.775132  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:06:31.783159  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:06:31.790685  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:06:31.798517  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:06:31.806212  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:06:31.814046  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
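The -checkend 86400 invocations above succeed only when a certificate remains valid for at least another 24 hours, which is how the restart decides whether the existing certificates can be reused. A sketch of the same check done natively with crypto/x509; the path in main is just an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window (the analogue of `openssl x509 -checkend <seconds>`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	}
}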
	I1119 23:06:31.822145  142733 kubeadm.go:935] updating node {m02 192.168.39.191 8443 v1.34.1 crio true true} ...
	I1119 23:06:31.822259  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:06:31.822290  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:06:31.822339  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:06:31.849048  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:06:31.849130  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
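The manifest above is written into /etc/kubernetes/manifests so kubelet runs kube-vip as a static pod: cp_enable plus lb_enable make it both hold the 192.168.39.254 VIP and load-balance API traffic on port 8443, with leader election (plndr-cp-lock, 5s lease, 3s renew deadline, 1s retry) deciding which control-plane node answers for the VIP. A sketch of rendering such a manifest from a template; the struct fields and the shortened template text are illustrative assumptions, not minikube's actual kube-vip template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down kube-vip static-pod template with the values that vary per
// cluster pulled out into parameters.
var manifest = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "true"
  hostNetwork: true
`))

func main() {
	params := struct {
		Image, VIP, Port string
	}{"ghcr.io/kube-vip/kube-vip:v1.0.1", "192.168.39.254", "8443"}
	if err := manifest.Execute(os.Stdout, params); err != nil {
		os.Exit(1)
	}
}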
	I1119 23:06:31.849212  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:06:31.862438  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:06:31.862506  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:06:31.874865  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:06:31.897430  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:06:31.918586  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:06:31.939534  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:06:31.943930  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
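The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends one pointing at the VIP, so lookups on the node resolve to 192.168.39.254. The same idea in Go, written against a scratch file rather than the real /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host from the hosts file and
// appends a fresh "ip<TAB>host" entry, mirroring the shell pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; rewritten below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// "hosts.test" is a scratch file for trying the sketch out safely.
	if err := ensureHostsEntry("hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}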
	I1119 23:06:31.958780  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.100156  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.133415  142733 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.39.191 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:06:32.133754  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.133847  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:06:32.133936  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:06:32.133949  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 113.063µs
	I1119 23:06:32.133960  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:06:32.133970  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:06:32.134176  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:32.135284  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:06:32.136324  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.136777  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:06:32.139351  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.139927  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:06:32.139963  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:06:32.140169  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:06:32.321166  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:06:32.321693  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.321714  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.323895  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:06:32.326607  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327119  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:06:32.327146  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:06:32.327377  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:06:32.352387  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:06:32.352506  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:06:32.352953  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m02" to be "Ready" ...
	I1119 23:06:32.500722  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:06:32.500745  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:06:32.503448  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02
	I1119 23:06:34.010161  142733 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.39.15:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:06:41.592816  142733 node_ready.go:49] node "ha-487903-m02" is "Ready"
	I1119 23:06:41.592846  142733 node_ready.go:38] duration metric: took 9.239866557s for node "ha-487903-m02" to be "Ready" ...
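node_ready above simply polls the API until ha-487903-m02 reports the Ready condition. A sketch of an equivalent wait with client-go; the kubeconfig path and poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports the Ready condition,
// tolerating transient API errors while the control plane settles.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-487903-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}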
	I1119 23:06:41.592864  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:06:41.592953  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.093838  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:06:42.118500  142733 api_server.go:72] duration metric: took 9.985021825s to wait for apiserver process to appear ...
	I1119 23:06:42.118528  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:06:42.118547  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.123892  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.123926  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
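The 500s above are expected while the rbac/bootstrap-roles and scheduling post-start hooks finish; the loop keeps re-querying /healthz until the apiserver returns 200. A sketch of such a poll; it skips TLS verification only to stay short, whereas the real check trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver /healthz endpoint until it returns 200 or
// the deadline passes, printing the failing hooks on each 500.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.15:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}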
	I1119 23:06:42.619715  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:42.637068  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:42.637097  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.118897  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.133996  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.134034  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:43.618675  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:43.661252  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:43.661293  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.118914  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.149362  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.149396  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:44.618983  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:44.670809  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:44.670848  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.119579  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.130478  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:06:45.130510  142733 api_server.go:103] status: https://192.168.39.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:06:45.619260  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:06:45.628758  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:06:45.631891  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:06:45.631928  142733 api_server.go:131] duration metric: took 3.513391545s to wait for apiserver health ...
	I1119 23:06:45.631939  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:06:45.660854  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:06:45.660934  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660946  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.660955  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.660965  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.660971  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.660978  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.660983  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.660988  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.660995  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.661002  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.661009  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.661014  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.661025  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.661033  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.661038  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.661043  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.661047  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.661051  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.661062  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.661066  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.661071  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.661075  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.661080  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.661084  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.661091  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.661095  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.661103  142733 system_pods.go:74] duration metric: took 29.156984ms to wait for pod list to return data ...
	I1119 23:06:45.661123  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:06:45.681470  142733 default_sa.go:45] found service account: "default"
	I1119 23:06:45.681503  142733 default_sa.go:55] duration metric: took 20.368831ms for default service account to be created ...
	I1119 23:06:45.681516  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:06:45.756049  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:06:45.756097  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756115  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:06:45.756124  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:06:45.756130  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:06:45.756141  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:06:45.756153  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:06:45.756158  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:06:45.756163  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:06:45.756168  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:06:45.756180  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:06:45.756187  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:06:45.756193  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:06:45.756214  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:06:45.756220  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:06:45.756227  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:06:45.756232  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:06:45.756242  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:06:45.756248  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:06:45.756253  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:06:45.756258  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:06:45.756267  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:06:45.756276  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:06:45.756281  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:06:45.756286  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:06:45.756290  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:06:45.756299  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:06:45.756310  142733 system_pods.go:126] duration metric: took 74.786009ms to wait for k8s-apps to be running ...
	I1119 23:06:45.756320  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:06:45.756377  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:06:45.804032  142733 system_svc.go:56] duration metric: took 47.697905ms WaitForService to wait for kubelet
	I1119 23:06:45.804075  142733 kubeadm.go:587] duration metric: took 13.670605736s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:06:45.804108  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:06:45.809115  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809156  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809181  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809187  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809193  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809200  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809208  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:06:45.809216  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:06:45.809222  142733 node_conditions.go:105] duration metric: took 5.108401ms to run NodePressure ...
	I1119 23:06:45.809243  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:06:45.809289  142733 start.go:256] writing updated cluster config ...
	I1119 23:06:45.811415  142733 out.go:203] 
	I1119 23:06:45.813102  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:06:45.813254  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.814787  142733 out.go:179] * Starting "ha-487903-m03" control-plane node in "ha-487903" cluster
	I1119 23:06:45.815937  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:06:45.815964  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:06:45.816100  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:06:45.816115  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:06:45.816268  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:06:45.816543  142733 start.go:360] acquireMachinesLock for ha-487903-m03: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:06:45.816612  142733 start.go:364] duration metric: took 39.245µs to acquireMachinesLock for "ha-487903-m03"
	I1119 23:06:45.816630  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:06:45.816642  142733 fix.go:54] fixHost starting: m03
	I1119 23:06:45.818510  142733 fix.go:112] recreateIfNeeded on ha-487903-m03: state=Stopped err=<nil>
	W1119 23:06:45.818540  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:06:45.819904  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m03" ...
	I1119 23:06:45.819950  142733 main.go:143] libmachine: starting domain...
	I1119 23:06:45.819961  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:06:45.820828  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:06:45.821278  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:06:45.821805  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:06:45.823105  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m03</name>
	  <uuid>e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/ha-487903-m03.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b3:68:3d'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7a:90:da'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
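Restarting the stopped m03 machine comes down to starting the already-defined libvirt domain whose XML is dumped above, then waiting for it to report running. A sketch using the libvirt Go bindings; the connection URI and polling are assumptions rather than minikube's exact kvm2 driver code.

package main

import (
	"fmt"
	"time"

	"libvirt.org/go/libvirt"
)

// restartDomain looks up an existing libvirt domain by name, boots it, and
// waits for it to reach the running state.
func restartDomain(name string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName(name)
	if err != nil {
		return err
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // starts the already-defined domain
		return err
	}
	for i := 0; i < 60; i++ {
		state, _, err := dom.GetState()
		if err == nil && state == libvirt.DOMAIN_RUNNING {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("domain %s did not reach running state", name)
}

func main() {
	if err := restartDomain("ha-487903-m03"); err != nil {
		panic(err)
	}
	fmt.Println("domain running")
}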
	
	I1119 23:06:47.444391  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:06:47.445887  142733 main.go:143] libmachine: domain is now running
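
The "starting domain", "ensuring networks are active", "getting domain XML" and "domain is now running" lines above map onto a handful of libvirt calls. Below is a minimal sketch of that sequence using the libvirt Go bindings (assuming the libvirt.org/go/libvirt package and a local qemu:///system socket); it is an illustration only, not minikube's actual kvm2 driver code.

// A sketch only: restart an existing, already-defined KVM domain the way the
// log above does it — make sure its networks are active, dump its XML, start it.
package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed binding, not the kvm2 driver
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "ensuring networks are active": both networks referenced by the domain XML.
	for _, name := range []string{"default", "mk-ha-487903"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if active, _ := net.IsActive(); !active {
			if err := net.Create(); err != nil { // equivalent of `virsh net-start`
				log.Fatal(err)
			}
		}
		net.Free()
	}

	// "getting domain XML" followed by "starting domain".
	dom, err := conn.LookupDomainByName("ha-487903-m03")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if xml, err := dom.GetXMLDesc(0); err == nil {
		fmt.Printf("domain XML is %d bytes\n", len(xml))
	}
	if err := dom.Create(); err != nil { // equivalent of `virsh start ha-487903-m03`
		log.Fatal(err)
	}
}
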
	I1119 23:06:47.445908  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:06:47.446706  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447357  142733 main.go:143] libmachine: domain ha-487903-m03 has current primary IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.447380  142733 main.go:143] libmachine: found domain IP: 192.168.39.160
	I1119 23:06:47.447388  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:06:47.447950  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.447985  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m03", mac: "52:54:00:b3:68:3d", ip: "192.168.39.160"}
	I1119 23:06:47.447998  142733 main.go:143] libmachine: reserved static IP address 192.168.39.160 for domain ha-487903-m03
	I1119 23:06:47.448003  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:06:47.448010  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:06:47.450788  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451222  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:06:47.451253  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:06:47.451441  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:06:47.451661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:06:47.451673  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:06:50.540171  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:56.620202  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: no route to host
	I1119 23:06:59.621964  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.160:22: connect: connection refused
	I1119 23:07:02.732773  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
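
The "waiting for SSH" phase above simply re-dials port 22 and runs `exit 0` until the guest's sshd answers, which is why the log shows a few "no route to host" and "connection refused" errors before the empty (successful) output. A rough sketch of that retry loop follows, assuming golang.org/x/crypto/ssh and a hypothetical waitForSSH helper rather than libmachine's real implementation.

// A sketch of the SSH wait: dial, run `exit 0`, retry until success or timeout.
package provision

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh" // assumed; libmachine uses its own SSH client wrapper
)

func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg) // addr like "192.168.39.160:22"
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0") // the same probe command the log runs
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for SSH on %s (last error: %v)", addr, err)
		}
		time.Sleep(3 * time.Second) // the log shows a few seconds between attempts
	}
}
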
	I1119 23:07:02.736628  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737046  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.737076  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.737371  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:02.737615  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:02.740024  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.740555  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.740752  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.741040  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.741054  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:02.852322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:02.852355  142733 buildroot.go:166] provisioning hostname "ha-487903-m03"
	I1119 23:07:02.855519  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856083  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.856112  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.856309  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.856556  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.856572  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m03 && echo "ha-487903-m03" | sudo tee /etc/hostname
	I1119 23:07:02.990322  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m03
	
	I1119 23:07:02.993714  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994202  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:02.994233  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:02.994405  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:02.994627  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:02.994651  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:03.118189  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:07:03.118221  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:03.118237  142733 buildroot.go:174] setting up certificates
	I1119 23:07:03.118248  142733 provision.go:84] configureAuth start
	I1119 23:07:03.121128  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.121630  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.121656  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124221  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124569  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.124592  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.124715  142733 provision.go:143] copyHostCerts
	I1119 23:07:03.124748  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124787  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:03.124797  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:03.124892  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:03.125005  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125037  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:03.125047  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:03.125090  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:03.125160  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125188  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:03.125198  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:03.125238  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:03.125306  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m03 san=[127.0.0.1 192.168.39.160 ha-487903-m03 localhost minikube]
	I1119 23:07:03.484960  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:03.485022  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:03.487560  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488008  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.488032  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.488178  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:03.574034  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:03.574117  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:03.604129  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:03.604216  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:03.635162  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:03.635235  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:07:03.668358  142733 provision.go:87] duration metric: took 550.091154ms to configureAuth
	I1119 23:07:03.668387  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:03.668643  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:03.671745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672214  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.672242  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.672395  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:03.672584  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:03.672599  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:03.950762  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:03.950792  142733 machine.go:97] duration metric: took 1.213162195s to provisionDockerMachine
	I1119 23:07:03.950807  142733 start.go:293] postStartSetup for "ha-487903-m03" (driver="kvm2")
	I1119 23:07:03.950821  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:03.950908  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:03.954010  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954449  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:03.954472  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:03.954609  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.043080  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:04.048534  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:04.048567  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:04.048645  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:04.048729  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:04.048741  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:04.048850  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:04.062005  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:04.095206  142733 start.go:296] duration metric: took 144.382125ms for postStartSetup
	I1119 23:07:04.095293  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:04.097927  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098314  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.098337  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.098469  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.187620  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:04.187695  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:04.250288  142733 fix.go:56] duration metric: took 18.433638518s for fixHost
	I1119 23:07:04.253813  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254395  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.254423  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.254650  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:04.254923  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1119 23:07:04.254938  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:04.407951  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593624.369608325
	
	I1119 23:07:04.407981  142733 fix.go:216] guest clock: 1763593624.369608325
	I1119 23:07:04.407992  142733 fix.go:229] Guest: 2025-11-19 23:07:04.369608325 +0000 UTC Remote: 2025-11-19 23:07:04.250316644 +0000 UTC m=+71.594560791 (delta=119.291681ms)
	I1119 23:07:04.408018  142733 fix.go:200] guest clock delta is within tolerance: 119.291681ms
	I1119 23:07:04.408026  142733 start.go:83] releasing machines lock for "ha-487903-m03", held for 18.591403498s
	I1119 23:07:04.411093  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.411490  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.411518  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.413431  142733 out.go:179] * Found network options:
	I1119 23:07:04.414774  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191
	W1119 23:07:04.415854  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.415891  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416317  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:04.416348  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:04.416422  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:04.416436  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:04.419695  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.419745  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420204  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420228  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420310  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:04.420352  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:04.420397  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.420643  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:04.657635  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:04.665293  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:04.665372  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:04.689208  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:04.689244  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:04.689352  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:04.714215  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:04.733166  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:04.733238  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:04.756370  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:04.778280  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:04.943140  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:05.174139  142733 docker.go:234] disabling docker service ...
	I1119 23:07:05.174230  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:05.192652  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:05.219388  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:05.383745  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:05.538084  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:05.555554  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:05.579503  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:05.579567  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.593464  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:05.593530  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.609133  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.624066  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.637817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:05.653008  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.666833  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.691556  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:05.705398  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:05.717404  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:05.717480  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:05.740569  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
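
The three commands above show the netfilter fallback: the bridge-nf-call-iptables sysctl does not exist until br_netfilter is loaded, so the failed probe is expected ("which might be okay") and is followed by a modprobe and by enabling IPv4 forwarding. A local-only sketch of that fallback with os/exec is shown below; in the run above the same commands go over SSH via ssh_runner.

// A sketch of the netfilter fallback, run locally instead of over SSH.
package provision

import "os/exec"

func ensureBridgeNetfilter() error {
	// The sysctl key is absent until br_netfilter is loaded, so a failure here
	// is the expected path on a fresh guest.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	// echo 1 > /proc/sys/net/ipv4/ip_forward, exactly as in the log.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}
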
	I1119 23:07:05.753510  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:05.907119  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:07:06.048396  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:06.048486  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:06.055638  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:06.055719  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:06.061562  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:06.110271  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:07:06.110342  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:06.146231  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:06.178326  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:06.179543  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:06.180760  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:06.184561  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.184934  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:06.184957  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:06.185144  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:06.190902  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:06.207584  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:06.207839  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:06.209435  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:06.209634  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.160
	I1119 23:07:06.209644  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:06.209656  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:06.209760  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:06.209804  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:06.209811  142733 certs.go:257] generating profile certs ...
	I1119 23:07:06.209893  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key
	I1119 23:07:06.209959  142733 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key.0aa3aad5
	I1119 23:07:06.210018  142733 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key
	I1119 23:07:06.210035  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:06.210054  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:06.210067  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:06.210080  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:06.210091  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1119 23:07:06.210102  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1119 23:07:06.210114  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1119 23:07:06.210126  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1119 23:07:06.210182  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:06.210223  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:06.210235  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:06.210266  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:06.210291  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:06.210312  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:06.210372  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:06.210412  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:06.210426  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:06.210444  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:06.213240  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213640  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:06.213661  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:06.213778  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:06.286328  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1119 23:07:06.292502  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1119 23:07:06.306380  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1119 23:07:06.311916  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1119 23:07:06.325372  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1119 23:07:06.331268  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1119 23:07:06.346732  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1119 23:07:06.351946  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1119 23:07:06.366848  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1119 23:07:06.372483  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1119 23:07:06.389518  142733 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1119 23:07:06.395938  142733 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1119 23:07:06.409456  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:06.450401  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:06.486719  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:06.523798  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:06.561368  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 23:07:06.599512  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:07:06.634946  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:07:06.670031  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 23:07:06.704068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:06.735677  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:06.768990  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:06.806854  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1119 23:07:06.832239  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1119 23:07:06.856375  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1119 23:07:06.879310  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1119 23:07:06.902404  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1119 23:07:06.927476  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1119 23:07:06.952223  142733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1119 23:07:06.974196  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:06.981644  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:06.999412  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005373  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.005446  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:07.013895  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:07.031130  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:07.046043  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.051937  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.052014  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:07.059543  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:07.078500  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:07.093375  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099508  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.099578  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:07.107551  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:07:07.123243  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:07.129696  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:07:07.137849  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:07:07.145809  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:07:07.153731  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:07:07.161120  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:07:07.168309  142733 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 23:07:07.176142  142733 kubeadm.go:935] updating node {m03 192.168.39.160 8443 v1.34.1 crio true true} ...
	I1119 23:07:07.176256  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:07:07.176285  142733 kube-vip.go:115] generating kube-vip config ...
	I1119 23:07:07.176329  142733 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1119 23:07:07.203479  142733 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1119 23:07:07.203570  142733 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
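
The kube-vip static pod manifest above is rendered from a template and written to /etc/kubernetes/manifests (1441 bytes in this run); only a few values — the VIP, port, interface and image — actually vary per cluster. A compressed illustration of that rendering with text/template follows; the template body is abbreviated and the struct and function names are made up for the sketch, not minikube's real ones.

// A compressed sketch of rendering the kube-vip static pod manifest.
package provision

import (
	"os"
	"text/template"
)

type kubeVIPValues struct {
	VIP       string // 192.168.39.254 in this run
	Port      string // "8443"
	Interface string // "eth0"
	Image     string // "ghcr.io/kube-vip/kube-vip:v1.0.1"
}

// The template below keeps only the fields that vary; the full manifest the
// log prints also carries the leader-election env vars, securityContext, etc.
var kubeVIPTmpl = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - {name: vip_interface, value: {{.Interface}}}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: {{.VIP}}}
  hostNetwork: true
`))

func writeKubeVIPManifest(path string, v kubeVIPValues) error {
	f, err := os.Create(path) // e.g. /etc/kubernetes/manifests/kube-vip.yaml
	if err != nil {
		return err
	}
	defer f.Close()
	return kubeVIPTmpl.Execute(f, v)
}
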
	I1119 23:07:07.203646  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:07.217413  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:07.217503  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1119 23:07:07.230746  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:07.256658  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:07.282507  142733 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1119 23:07:07.305975  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:07.311016  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:07.328648  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.494364  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.517777  142733 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.39.160 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:07:07.518159  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.518271  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:07.518379  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:07.518395  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 133.678µs
	I1119 23:07:07.518407  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:07.518421  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:07.518647  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:07.520684  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:07.520832  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.521966  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:07.523804  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524372  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:07.524416  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:07.524599  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:07.723792  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:07.724326  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.724350  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.726364  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.728774  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729239  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:07.729270  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:07.729424  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:07.746212  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:07.746278  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:07.746586  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m03" to be "Ready" ...
	I1119 23:07:07.858504  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:07.858530  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:07.860355  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:07.862516  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.862974  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:07.863000  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:07.863200  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:08.011441  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:08.011468  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:08.013393  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03
	W1119 23:07:09.751904  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:12.252353  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:14.254075  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:16.256443  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	W1119 23:07:18.752485  142733 node_ready.go:57] node "ha-487903-m03" has "Ready":"Unknown" status (will retry)
	I1119 23:07:19.751738  142733 node_ready.go:49] node "ha-487903-m03" is "Ready"
	I1119 23:07:19.751783  142733 node_ready.go:38] duration metric: took 12.005173883s for node "ha-487903-m03" to be "Ready" ...
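
The Ready wait above polls the node object and tolerates the "Ready":"Unknown" answers every couple of seconds until the kubelet reports in (about 12s here). A bare-bones version of that poll with client-go is sketched below; minikube's node_ready.go wraps the same check in its own retry helpers, so treat this as an approximation.

// A sketch of the node Ready poll using client-go.
package provision

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q never became Ready within %s", name, timeout)
		}
		time.Sleep(2 * time.Second) // the log retries every couple of seconds
	}
}
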
	I1119 23:07:19.751803  142733 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:07:19.751911  142733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:07:19.833604  142733 api_server.go:72] duration metric: took 12.315777974s to wait for apiserver process to appear ...
	I1119 23:07:19.833635  142733 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:07:19.833668  142733 api_server.go:253] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I1119 23:07:19.841482  142733 api_server.go:279] https://192.168.39.15:8443/healthz returned 200:
	ok
	I1119 23:07:19.842905  142733 api_server.go:141] control plane version: v1.34.1
	I1119 23:07:19.842932  142733 api_server.go:131] duration metric: took 9.287176ms to wait for apiserver health ...
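
The healthz check above is a plain HTTPS GET against the control-plane endpoint that expects a 200 response with body "ok". The minimal sketch below skips certificate verification to stay short, whereas the real check authenticates with the profile's client certificate and CA from the client config shown earlier.

// A sketch of the /healthz probe; the real check presents the profile's client cert.
package provision

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(host string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps the sketch short; do not do this in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", host)) // e.g. 192.168.39.15
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
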
	I1119 23:07:19.842951  142733 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:07:19.855636  142733 system_pods.go:59] 26 kube-system pods found
	I1119 23:07:19.855671  142733 system_pods.go:61] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.855679  142733 system_pods.go:61] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.855689  142733 system_pods.go:61] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.855695  142733 system_pods.go:61] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.855700  142733 system_pods.go:61] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.855705  142733 system_pods.go:61] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.855710  142733 system_pods.go:61] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.855714  142733 system_pods.go:61] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.855724  142733 system_pods.go:61] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.855733  142733 system_pods.go:61] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.855738  142733 system_pods.go:61] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.855743  142733 system_pods.go:61] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.855747  142733 system_pods.go:61] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.855753  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.855760  142733 system_pods.go:61] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.855764  142733 system_pods.go:61] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.855769  142733 system_pods.go:61] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.855774  142733 system_pods.go:61] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.855778  142733 system_pods.go:61] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.855783  142733 system_pods.go:61] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.855793  142733 system_pods.go:61] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.855797  142733 system_pods.go:61] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.855802  142733 system_pods.go:61] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.855806  142733 system_pods.go:61] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.855814  142733 system_pods.go:61] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.855818  142733 system_pods.go:61] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.855827  142733 system_pods.go:74] duration metric: took 12.86809ms to wait for pod list to return data ...
	I1119 23:07:19.855842  142733 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:07:19.860573  142733 default_sa.go:45] found service account: "default"
	I1119 23:07:19.860597  142733 default_sa.go:55] duration metric: took 4.749483ms for default service account to be created ...
	I1119 23:07:19.860606  142733 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:07:19.870790  142733 system_pods.go:86] 26 kube-system pods found
	I1119 23:07:19.870825  142733 system_pods.go:89] "coredns-66bc5c9577-5gt2t" [4a5bca7b-6369-4f70-b467-829bf0c07711] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:07:19.870831  142733 system_pods.go:89] "coredns-66bc5c9577-zjxkb" [9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5] Running
	I1119 23:07:19.870836  142733 system_pods.go:89] "etcd-ha-487903" [c5b86825-5332-4f13-9566-5427deb2b911] Running
	I1119 23:07:19.870840  142733 system_pods.go:89] "etcd-ha-487903-m02" [a83259c3-4655-44c3-99ac-0f3a1c270ce1] Running
	I1119 23:07:19.870843  142733 system_pods.go:89] "etcd-ha-487903-m03" [1569e0f7-59af-4ee0-8f74-bcc25085147d] Running
	I1119 23:07:19.870847  142733 system_pods.go:89] "kindnet-9zx8x" [afdae080-5eff-4da4-8aff-726047c47eee] Running
	I1119 23:07:19.870851  142733 system_pods.go:89] "kindnet-kslhw" [4b82f473-b098-45a9-adf5-8c239353c3cd] Running
	I1119 23:07:19.870854  142733 system_pods.go:89] "kindnet-p9nqh" [1dd7683b-c7e7-487c-904a-506a24f833d8] Running
	I1119 23:07:19.870857  142733 system_pods.go:89] "kindnet-s9k2l" [98a511b0-fa67-4874-8013-e8a4a0adf5f1] Running
	I1119 23:07:19.870861  142733 system_pods.go:89] "kube-apiserver-ha-487903" [ab2ffc5e-d01f-4f0d-840e-f029adf3e0c1] Running
	I1119 23:07:19.870865  142733 system_pods.go:89] "kube-apiserver-ha-487903-m02" [cf63e467-a8e4-4ef3-910a-e35e6e75afdc] Running
	I1119 23:07:19.870870  142733 system_pods.go:89] "kube-apiserver-ha-487903-m03" [a121515e-29eb-42b5-b153-f7a0046ac5c2] Running
	I1119 23:07:19.870895  142733 system_pods.go:89] "kube-controller-manager-ha-487903" [8c9a78c7-ca0e-42cd-90e1-c4c6496de2d8] Running
	I1119 23:07:19.870902  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m02" [bb173b4b-c202-408f-941e-f0af81d49115] Running
	I1119 23:07:19.870911  142733 system_pods.go:89] "kube-controller-manager-ha-487903-m03" [065ea88f-3d45-429d-8e50-ca0aa1a73bcf] Running
	I1119 23:07:19.870916  142733 system_pods.go:89] "kube-proxy-77wjf" [cff3e933-3100-44ce-9215-9db308c328d9] Running
	I1119 23:07:19.870924  142733 system_pods.go:89] "kube-proxy-fk7mh" [8743ca8a-c5e5-4da6-a983-a6191d2a852a] Running
	I1119 23:07:19.870929  142733 system_pods.go:89] "kube-proxy-tkx9r" [93e33992-972a-4c46-8ecb-143a0746d256] Running
	I1119 23:07:19.870936  142733 system_pods.go:89] "kube-proxy-zxtk6" [a2e2aa8e-2dc5-4233-a89a-da39ab61fd72] Running
	I1119 23:07:19.870941  142733 system_pods.go:89] "kube-scheduler-ha-487903" [9adc6023-3fda-4823-9f02-f9ec96db2287] Running
	I1119 23:07:19.870946  142733 system_pods.go:89] "kube-scheduler-ha-487903-m02" [45efe4be-fc82-4bc3-9c0a-a731153d7a72] Running
	I1119 23:07:19.870953  142733 system_pods.go:89] "kube-scheduler-ha-487903-m03" [bdb98728-6527-4ad5-9a13-3b7d2aba1643] Running
	I1119 23:07:19.870957  142733 system_pods.go:89] "kube-vip-ha-487903" [38307c4c-3c7f-4880-a86f-c8066b3da90a] Running
	I1119 23:07:19.870963  142733 system_pods.go:89] "kube-vip-ha-487903-m02" [a4dc6a51-edac-4815-8723-d8fb6f7b6935] Running
	I1119 23:07:19.870966  142733 system_pods.go:89] "kube-vip-ha-487903-m03" [45a6021a-3c71-47ce-9553-d75c327c14f3] Running
	I1119 23:07:19.870969  142733 system_pods.go:89] "storage-provisioner" [2f8e8800-093c-4c68-ac9c-300049543509] Running
	I1119 23:07:19.870982  142733 system_pods.go:126] duration metric: took 10.369487ms to wait for k8s-apps to be running ...
	I1119 23:07:19.870995  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:19.871070  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:19.923088  142733 system_svc.go:56] duration metric: took 52.080591ms WaitForService to wait for kubelet
	I1119 23:07:19.923137  142733 kubeadm.go:587] duration metric: took 12.405311234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:19.923168  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:19.930259  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930299  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930316  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930323  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930329  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930334  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930343  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:19.930352  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:19.930359  142733 node_conditions.go:105] duration metric: took 7.184829ms to run NodePressure ...
	I1119 23:07:19.930381  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:19.930425  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:19.932180  142733 out.go:203] 
	I1119 23:07:19.934088  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:19.934226  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.935991  142733 out.go:179] * Starting "ha-487903-m04" worker node in "ha-487903" cluster
	I1119 23:07:19.937566  142733 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 23:07:19.937584  142733 cache.go:65] Caching tarball of preloaded images
	I1119 23:07:19.937693  142733 preload.go:238] Found /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 23:07:19.937716  142733 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 23:07:19.937810  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:19.938027  142733 start.go:360] acquireMachinesLock for ha-487903-m04: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:07:19.938076  142733 start.go:364] duration metric: took 28.868µs to acquireMachinesLock for "ha-487903-m04"
	I1119 23:07:19.938095  142733 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:07:19.938109  142733 fix.go:54] fixHost starting: m04
	I1119 23:07:19.940296  142733 fix.go:112] recreateIfNeeded on ha-487903-m04: state=Stopped err=<nil>
	W1119 23:07:19.940327  142733 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:07:19.942168  142733 out.go:252] * Restarting existing kvm2 VM for "ha-487903-m04" ...
	I1119 23:07:19.942220  142733 main.go:143] libmachine: starting domain...
	I1119 23:07:19.942265  142733 main.go:143] libmachine: ensuring networks are active...
	I1119 23:07:19.943145  142733 main.go:143] libmachine: Ensuring network default is active
	I1119 23:07:19.943566  142733 main.go:143] libmachine: Ensuring network mk-ha-487903 is active
	I1119 23:07:19.944170  142733 main.go:143] libmachine: getting domain XML...
	I1119 23:07:19.945811  142733 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>ha-487903-m04</name>
	  <uuid>2ce148a1-b982-46f6-ada0-6a5a5b14ddce</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/ha-487903-m04.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:eb:f3:c3'/>
	      <source network='mk-ha-487903'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:03:3a:d4'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
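	A domain defined by XML like the above can also be inspected or started by hand with libvirt's own tooling; a minimal sketch using the domain and network names from this log (plain virsh commands, not part of minikube):
	    virsh dumpxml ha-487903-m04          # show the definition libmachine generated
	    virsh start ha-487903-m04            # boot the VM
	    virsh net-dhcp-leases mk-ha-487903   # watch for the DHCP lease / IP address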
	
	I1119 23:07:21.541216  142733 main.go:143] libmachine: waiting for domain to start...
	I1119 23:07:21.542947  142733 main.go:143] libmachine: domain is now running
	I1119 23:07:21.542968  142733 main.go:143] libmachine: waiting for IP...
	I1119 23:07:21.543929  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544529  142733 main.go:143] libmachine: domain ha-487903-m04 has current primary IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.544546  142733 main.go:143] libmachine: found domain IP: 192.168.39.187
	I1119 23:07:21.544554  142733 main.go:143] libmachine: reserving static IP address...
	I1119 23:07:21.545091  142733 main.go:143] libmachine: found host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.545120  142733 main.go:143] libmachine: skip adding static IP to network mk-ha-487903 - found existing host DHCP lease matching {name: "ha-487903-m04", mac: "52:54:00:eb:f3:c3", ip: "192.168.39.187"}
	I1119 23:07:21.545133  142733 main.go:143] libmachine: reserved static IP address 192.168.39.187 for domain ha-487903-m04
	I1119 23:07:21.545137  142733 main.go:143] libmachine: waiting for SSH...
	I1119 23:07:21.545142  142733 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:07:21.547650  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548218  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:21.548249  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:21.548503  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:21.548718  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:21.548730  142733 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:07:24.652184  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:30.732203  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: no route to host
	I1119 23:07:34.764651  142733 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.187:22: connect: connection refused
	I1119 23:07:37.880284  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
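	The wait-for-SSH loop above simply retries a trivial command until the guest's sshd answers; an equivalent manual probe, reusing the key path, user and IP shown in the log, would look roughly like:
	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa \
	        docker@192.168.39.187 'exit 0'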
	I1119 23:07:37.884099  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884565  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.884591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.884934  142733 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/config.json ...
	I1119 23:07:37.885280  142733 machine.go:94] provisionDockerMachine start ...
	I1119 23:07:37.887971  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888368  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:37.888391  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:37.888542  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:37.888720  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:37.888729  142733 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:07:37.998350  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:07:37.998394  142733 buildroot.go:166] provisioning hostname "ha-487903-m04"
	I1119 23:07:38.002080  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002563  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.002588  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.002794  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.003043  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.003057  142733 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-487903-m04 && echo "ha-487903-m04" | sudo tee /etc/hostname
	I1119 23:07:38.135349  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-487903-m04
	
	I1119 23:07:38.138757  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139357  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.139392  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.139707  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:38.140010  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:38.140053  142733 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-487903-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-487903-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-487903-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:07:38.264087  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
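	After this hostname step the guest's /etc/hosts is expected to carry a 127.0.1.1 entry for the node name, roughly as follows (only the 127.0.1.1 line is written here; the localhost line is the usual image default):
	    127.0.0.1   localhost
	    127.0.1.1   ha-487903-m04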
	I1119 23:07:38.264126  142733 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:07:38.264149  142733 buildroot.go:174] setting up certificates
	I1119 23:07:38.264161  142733 provision.go:84] configureAuth start
	I1119 23:07:38.267541  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.268176  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.268215  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.270752  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271136  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.271156  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.271421  142733 provision.go:143] copyHostCerts
	I1119 23:07:38.271453  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271483  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:07:38.271492  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:07:38.271573  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:07:38.271646  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271664  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:07:38.271667  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:07:38.271693  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:07:38.271735  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271751  142733 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:07:38.271757  142733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:07:38.271779  142733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:07:38.271823  142733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.ha-487903-m04 san=[127.0.0.1 192.168.39.187 ha-487903-m04 localhost minikube]
	I1119 23:07:38.932314  142733 provision.go:177] copyRemoteCerts
	I1119 23:07:38.932380  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:07:38.935348  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.935810  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:38.935836  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:38.936006  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.025808  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1119 23:07:39.025896  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:07:39.060783  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1119 23:07:39.060907  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 23:07:39.093470  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1119 23:07:39.093540  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1119 23:07:39.126116  142733 provision.go:87] duration metric: took 861.930238ms to configureAuth
	I1119 23:07:39.126158  142733 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:07:39.126455  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:39.129733  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130126  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.130155  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.130312  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.130560  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.130587  142733 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:07:39.433038  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:07:39.433084  142733 machine.go:97] duration metric: took 1.547777306s to provisionDockerMachine
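	The insecure-registry option written above lands in a sysconfig drop-in that CRI-O picks up on restart; it can be double-checked on the node with, for example:
	    cat /etc/sysconfig/crio.minikube
	    sudo systemctl status crio --no-pager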
	I1119 23:07:39.433101  142733 start.go:293] postStartSetup for "ha-487903-m04" (driver="kvm2")
	I1119 23:07:39.433114  142733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:07:39.433178  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:07:39.436063  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436658  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.436689  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.436985  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.524100  142733 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:07:39.529723  142733 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:07:39.529752  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:07:39.529847  142733 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:07:39.529973  142733 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:07:39.529988  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /etc/ssl/certs/1213692.pem
	I1119 23:07:39.530101  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:07:39.544274  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:39.576039  142733 start.go:296] duration metric: took 142.916645ms for postStartSetup
	I1119 23:07:39.576112  142733 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1119 23:07:39.578695  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579305  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.579334  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.579504  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.668947  142733 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I1119 23:07:39.669041  142733 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1119 23:07:39.733896  142733 fix.go:56] duration metric: took 19.795762355s for fixHost
	I1119 23:07:39.737459  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738018  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.738061  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.738362  142733 main.go:143] libmachine: Using SSH client type: native
	I1119 23:07:39.738661  142733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1119 23:07:39.738687  142733 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:07:39.869213  142733 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763593659.839682658
	
	I1119 23:07:39.869234  142733 fix.go:216] guest clock: 1763593659.839682658
	I1119 23:07:39.869241  142733 fix.go:229] Guest: 2025-11-19 23:07:39.839682658 +0000 UTC Remote: 2025-11-19 23:07:39.733931353 +0000 UTC m=+107.078175487 (delta=105.751305ms)
	I1119 23:07:39.869257  142733 fix.go:200] guest clock delta is within tolerance: 105.751305ms
	I1119 23:07:39.869262  142733 start.go:83] releasing machines lock for "ha-487903-m04", held for 19.931174771s
	I1119 23:07:39.872591  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.873064  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.873085  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.875110  142733 out.go:179] * Found network options:
	I1119 23:07:39.876331  142733 out.go:179]   - NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	W1119 23:07:39.877435  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877458  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877478  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877889  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877920  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	W1119 23:07:39.877932  142733 proxy.go:120] fail to check proxy env: Error ip not in block
	I1119 23:07:39.877962  142733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:07:39.877987  142733 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:07:39.881502  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.881991  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882088  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882128  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882283  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:39.882500  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:39.882524  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:39.882696  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:40.118089  142733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:07:40.126955  142733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:07:40.127054  142733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:07:40.150315  142733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:07:40.150351  142733 start.go:496] detecting cgroup driver to use...
	I1119 23:07:40.150436  142733 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:07:40.176112  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:07:40.195069  142733 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:07:40.195148  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:07:40.217113  142733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:07:40.240578  142733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:07:40.404108  142733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:07:40.642170  142733 docker.go:234] disabling docker service ...
	I1119 23:07:40.642260  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:07:40.659709  142733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:07:40.677698  142733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:07:40.845769  142733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:07:41.005373  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:07:41.028115  142733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:07:41.057337  142733 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 23:07:41.057425  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.072373  142733 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:07:41.072466  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.086681  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.100921  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.115817  142733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:07:41.132398  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.149261  142733 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:07:41.174410  142733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
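	Taken together, these sed edits leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (only the keys touched by the commands above; surrounding TOML sections omitted):
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]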
	I1119 23:07:41.189666  142733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:07:41.202599  142733 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:07:41.202679  142733 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:07:41.228059  142733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
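	Since /proc/sys/net/bridge only exists once the kernel module is loaded, the fallback here is to modprobe br_netfilter and enable IPv4 forwarding directly; the result can be verified by hand with:
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables
	    cat /proc/sys/net/ipv4/ip_forward   # expected to print 1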
	I1119 23:07:41.243031  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:41.403712  142733 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:07:41.527678  142733 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:07:41.527765  142733 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:07:41.534539  142733 start.go:564] Will wait 60s for crictl version
	I1119 23:07:41.534620  142733 ssh_runner.go:195] Run: which crictl
	I1119 23:07:41.539532  142733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:07:41.585994  142733 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:07:41.586086  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.621736  142733 ssh_runner.go:195] Run: crio --version
	I1119 23:07:41.656086  142733 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1119 23:07:41.657482  142733 out.go:179]   - env NO_PROXY=192.168.39.15
	I1119 23:07:41.658756  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191
	I1119 23:07:41.659970  142733 out.go:179]   - env NO_PROXY=192.168.39.15,192.168.39.191,192.168.39.160
	I1119 23:07:41.664105  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664530  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:41.664550  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:41.664716  142733 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:07:41.670624  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:41.688618  142733 mustload.go:66] Loading cluster: ha-487903
	I1119 23:07:41.688858  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:41.690292  142733 host.go:66] Checking if "ha-487903" exists ...
	I1119 23:07:41.690482  142733 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903 for IP: 192.168.39.187
	I1119 23:07:41.690491  142733 certs.go:195] generating shared ca certs ...
	I1119 23:07:41.690504  142733 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:07:41.690631  142733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:07:41.690692  142733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:07:41.690711  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1119 23:07:41.690731  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1119 23:07:41.690750  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1119 23:07:41.690768  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1119 23:07:41.690840  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:07:41.690886  142733 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:07:41.690897  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:07:41.690917  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:07:41.690937  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:07:41.690958  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:07:41.690994  142733 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:07:41.691025  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.691038  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem -> /usr/share/ca-certificates/121369.pem
	I1119 23:07:41.691048  142733 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> /usr/share/ca-certificates/1213692.pem
	I1119 23:07:41.691068  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:07:41.726185  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:07:41.762445  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:07:41.804578  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:07:41.841391  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:07:41.881178  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:07:41.917258  142733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:07:41.953489  142733 ssh_runner.go:195] Run: openssl version
	I1119 23:07:41.961333  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:07:41.977066  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983550  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.983610  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:07:41.991656  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:07:42.006051  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:07:42.021516  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028801  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.028900  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:07:42.036899  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:07:42.052553  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:07:42.067472  142733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073674  142733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.073751  142733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:07:42.081607  142733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
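	Each extra CA is installed by hashing it with openssl and symlinking the hash name into /etc/ssl/certs, which is how OpenSSL locates trust anchors; the pattern behind the commands above is essentially:
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0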
	I1119 23:07:42.096183  142733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:07:42.101534  142733 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 23:07:42.101590  142733 kubeadm.go:935] updating node {m04 192.168.39.187 0 v1.34.1 crio false true} ...
	I1119 23:07:42.101683  142733 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-487903-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-487903 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
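	Once the kubelet drop-in and unit file are written, they can be confirmed on the node with systemd's own tooling, for example:
	    systemctl cat kubelet
	    sudo systemctl status kubelet --no-pager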
	I1119 23:07:42.101762  142733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 23:07:42.115471  142733 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:07:42.115548  142733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1119 23:07:42.129019  142733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1119 23:07:42.153030  142733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:07:42.178425  142733 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1119 23:07:42.183443  142733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:07:42.200493  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.356810  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:42.394017  142733 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.39.187 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1119 23:07:42.394368  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.394458  142733 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:07:42.394553  142733 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:07:42.394567  142733 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 116.988µs
	I1119 23:07:42.394578  142733 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:07:42.394596  142733 cache.go:87] Successfully saved all images to host disk.
	I1119 23:07:42.394838  142733 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:07:42.395796  142733 out.go:179] * Verifying Kubernetes components...
	I1119 23:07:42.397077  142733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:07:42.397151  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.400663  142733 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401297  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:05 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 23:07:42.401366  142733 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 23:07:42.401574  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 23:07:42.612769  142733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:07:42.613454  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.613478  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.615709  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.618644  142733 main.go:143] libmachine: domain ha-487903-m02 has defined MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619227  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:70", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:23 +0000 UTC Type:0 Mac:52:54:00:04:d5:70 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-487903-m02 Clientid:01:52:54:00:04:d5:70}
	I1119 23:07:42.619265  142733 main.go:143] libmachine: domain ha-487903-m02 has defined IP address 192.168.39.191 and MAC address 52:54:00:04:d5:70 in network mk-ha-487903
	I1119 23:07:42.619437  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m02/id_rsa Username:docker}
	I1119 23:07:42.650578  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1119 23:07:42.650662  142733 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.15:8443
	I1119 23:07:42.651008  142733 node_ready.go:35] waiting up to 6m0s for node "ha-487903-m04" to be "Ready" ...
	I1119 23:07:42.759664  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.759695  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.762502  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.766101  142733 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766612  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:06:59 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 23:07:42.766645  142733 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 23:07:42.766903  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 23:07:42.916732  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:42.916761  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:42.919291  142733 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:07:42.922664  142733 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923283  142733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-20 00:07:33 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 23:07:42.923322  142733 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 23:07:42.923548  142733 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 23:07:43.068345  142733 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:07:43.068378  142733 cache_images.go:86] Images are preloaded, skipping loading
	I1119 23:07:43.068389  142733 cache_images.go:264] succeeded pushing to: ha-487903 ha-487903-m02 ha-487903-m03 ha-487903-m04
	I1119 23:07:43.156120  142733 node_ready.go:49] node "ha-487903-m04" is "Ready"
	I1119 23:07:43.156156  142733 node_ready.go:38] duration metric: took 505.123719ms for node "ha-487903-m04" to be "Ready" ...
	I1119 23:07:43.156173  142733 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:07:43.156241  142733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:07:43.175222  142733 system_svc.go:56] duration metric: took 19.040723ms WaitForService to wait for kubelet
	I1119 23:07:43.175261  142733 kubeadm.go:587] duration metric: took 781.202644ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:07:43.175288  142733 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:07:43.180835  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180870  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180910  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180916  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180924  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180942  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180953  142733 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:07:43.180959  142733 node_conditions.go:123] node cpu capacity is 2
	I1119 23:07:43.180965  142733 node_conditions.go:105] duration metric: took 5.670636ms to run NodePressure ...
	I1119 23:07:43.180984  142733 start.go:242] waiting for startup goroutines ...
	I1119 23:07:43.181017  142733 start.go:256] writing updated cluster config ...
	I1119 23:07:43.181360  142733 ssh_runner.go:195] Run: rm -f paused
	I1119 23:07:43.187683  142733 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:07:43.188308  142733 kapi.go:59] client config for ha-487903: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/ha-487903/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
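
(Both rest.Config dumps above leave QPS and Burst at zero, so client-go falls back to its default client-side limiter of roughly 5 requests/s with a burst of 10; that limiter is what produces the repeated "Waited before sending request ... client-side throttling" lines that follow. A short sketch of how such a config could be built with a larger budget, assuming a kubeconfig on disk rather than minikube's in-process construction.)

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Example kubeconfig path; the profile above uses its own client.crt/key and ca.crt.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at 0, client-go applies its defaults (about 5 QPS, burst 10),
	// which is what drives the throttling waits in the log. Raise them explicitly:
	cfg.QPS = 50
	cfg.Burst = 100

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	version, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("server version:", version.GitVersion)
}
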
	I1119 23:07:43.202770  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210054  142733 pod_ready.go:94] pod "coredns-66bc5c9577-5gt2t" is "Ready"
	I1119 23:07:43.210077  142733 pod_ready.go:86] duration metric: took 7.281319ms for pod "coredns-66bc5c9577-5gt2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.210085  142733 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.216456  142733 pod_ready.go:94] pod "coredns-66bc5c9577-zjxkb" is "Ready"
	I1119 23:07:43.216477  142733 pod_ready.go:86] duration metric: took 6.387459ms for pod "coredns-66bc5c9577-zjxkb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.220711  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230473  142733 pod_ready.go:94] pod "etcd-ha-487903" is "Ready"
	I1119 23:07:43.230503  142733 pod_ready.go:86] duration metric: took 9.759051ms for pod "etcd-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.230514  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238350  142733 pod_ready.go:94] pod "etcd-ha-487903-m02" is "Ready"
	I1119 23:07:43.238386  142733 pod_ready.go:86] duration metric: took 7.863104ms for pod "etcd-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.238400  142733 pod_ready.go:83] waiting for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.389841  142733 request.go:683] "Waited before sending request" delay="151.318256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-487903-m03"
	I1119 23:07:43.588929  142733 request.go:683] "Waited before sending request" delay="193.203585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:43.592859  142733 pod_ready.go:94] pod "etcd-ha-487903-m03" is "Ready"
	I1119 23:07:43.592895  142733 pod_ready.go:86] duration metric: took 354.487844ms for pod "etcd-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.789462  142733 request.go:683] "Waited before sending request" delay="196.405608ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1119 23:07:43.797307  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:43.989812  142733 request.go:683] "Waited before sending request" delay="192.389949ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903"
	I1119 23:07:44.189117  142733 request.go:683] "Waited before sending request" delay="193.300165ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:44.194456  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903" is "Ready"
	I1119 23:07:44.194483  142733 pod_ready.go:86] duration metric: took 397.15415ms for pod "kube-apiserver-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.194492  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.388959  142733 request.go:683] "Waited before sending request" delay="194.329528ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m02"
	I1119 23:07:44.589884  142733 request.go:683] "Waited before sending request" delay="195.382546ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:44.596472  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m02" is "Ready"
	I1119 23:07:44.596506  142733 pod_ready.go:86] duration metric: took 402.007843ms for pod "kube-apiserver-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.596519  142733 pod_ready.go:83] waiting for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:44.788946  142733 request.go:683] "Waited before sending request" delay="192.297042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-487903-m03"
	I1119 23:07:44.988960  142733 request.go:683] "Waited before sending request" delay="194.310641ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:44.996400  142733 pod_ready.go:94] pod "kube-apiserver-ha-487903-m03" is "Ready"
	I1119 23:07:44.996441  142733 pod_ready.go:86] duration metric: took 399.911723ms for pod "kube-apiserver-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.188855  142733 request.go:683] "Waited before sending request" delay="192.290488ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1119 23:07:45.196689  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.389182  142733 request.go:683] "Waited before sending request" delay="192.281881ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903"
	I1119 23:07:45.589591  142733 request.go:683] "Waited before sending request" delay="194.384266ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:45.595629  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903" is "Ready"
	I1119 23:07:45.595661  142733 pod_ready.go:86] duration metric: took 398.942038ms for pod "kube-controller-manager-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.595674  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.789154  142733 request.go:683] "Waited before sending request" delay="193.378185ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m02"
	I1119 23:07:45.989593  142733 request.go:683] "Waited before sending request" delay="195.373906ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:45.995418  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m02" is "Ready"
	I1119 23:07:45.995451  142733 pod_ready.go:86] duration metric: took 399.769417ms for pod "kube-controller-manager-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:45.995462  142733 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.188855  142733 request.go:683] "Waited before sending request" delay="193.309398ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-487903-m03"
	I1119 23:07:46.389512  142733 request.go:683] "Waited before sending request" delay="194.260664ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:46.394287  142733 pod_ready.go:94] pod "kube-controller-manager-ha-487903-m03" is "Ready"
	I1119 23:07:46.394312  142733 pod_ready.go:86] duration metric: took 398.844264ms for pod "kube-controller-manager-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.589870  142733 request.go:683] "Waited before sending request" delay="195.416046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1119 23:07:46.597188  142733 pod_ready.go:83] waiting for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.789771  142733 request.go:683] "Waited before sending request" delay="192.426623ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-77wjf"
	I1119 23:07:46.989150  142733 request.go:683] "Waited before sending request" delay="193.435229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:46.993720  142733 pod_ready.go:94] pod "kube-proxy-77wjf" is "Ready"
	I1119 23:07:46.993753  142733 pod_ready.go:86] duration metric: took 396.52945ms for pod "kube-proxy-77wjf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:46.993765  142733 pod_ready.go:83] waiting for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.189146  142733 request.go:683] "Waited before sending request" delay="195.267437ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fk7mh"
	I1119 23:07:47.388849  142733 request.go:683] "Waited before sending request" delay="192.29395ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:47.395640  142733 pod_ready.go:94] pod "kube-proxy-fk7mh" is "Ready"
	I1119 23:07:47.395670  142733 pod_ready.go:86] duration metric: took 401.897062ms for pod "kube-proxy-fk7mh" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.395683  142733 pod_ready.go:83] waiting for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.589099  142733 request.go:683] "Waited before sending request" delay="193.31568ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tkx9r"
	I1119 23:07:47.789418  142733 request.go:683] "Waited before sending request" delay="195.323511ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:47.795048  142733 pod_ready.go:94] pod "kube-proxy-tkx9r" is "Ready"
	I1119 23:07:47.795078  142733 pod_ready.go:86] duration metric: took 399.387799ms for pod "kube-proxy-tkx9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.795088  142733 pod_ready.go:83] waiting for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:47.989569  142733 request.go:683] "Waited before sending request" delay="194.336733ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxtk6"
	I1119 23:07:48.189017  142733 request.go:683] "Waited before sending request" delay="192.313826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m04"
	I1119 23:07:48.194394  142733 pod_ready.go:94] pod "kube-proxy-zxtk6" is "Ready"
	I1119 23:07:48.194435  142733 pod_ready.go:86] duration metric: took 399.338885ms for pod "kube-proxy-zxtk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.388945  142733 request.go:683] "Waited before sending request" delay="194.328429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1119 23:07:48.555571  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.789654  142733 request.go:683] "Waited before sending request" delay="195.382731ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903"
	I1119 23:07:48.795196  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903" is "Ready"
	I1119 23:07:48.795234  142733 pod_ready.go:86] duration metric: took 239.629107ms for pod "kube-scheduler-ha-487903" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.795246  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:48.989712  142733 request.go:683] "Waited before sending request" delay="194.356732ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m02"
	I1119 23:07:49.189524  142733 request.go:683] "Waited before sending request" delay="194.365482ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m02"
	I1119 23:07:49.195480  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m02" is "Ready"
	I1119 23:07:49.195503  142733 pod_ready.go:86] duration metric: took 400.248702ms for pod "kube-scheduler-ha-487903-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.195512  142733 pod_ready.go:83] waiting for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.388917  142733 request.go:683] "Waited before sending request" delay="193.285895ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-487903-m03"
	I1119 23:07:49.589644  142733 request.go:683] "Waited before sending request" delay="195.362698ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.39.254:8443/api/v1/nodes/ha-487903-m03"
	I1119 23:07:49.594210  142733 pod_ready.go:94] pod "kube-scheduler-ha-487903-m03" is "Ready"
	I1119 23:07:49.594248  142733 pod_ready.go:86] duration metric: took 398.725567ms for pod "kube-scheduler-ha-487903-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:07:49.594266  142733 pod_ready.go:40] duration metric: took 6.406545371s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:07:49.639756  142733 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 23:07:49.641778  142733 out.go:179] * Done! kubectl is now configured to use "ha-487903" cluster and "default" namespace by default
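
(The node_ready and pod_ready phases above boil down to one pattern: poll the API server until the object's Ready condition turns True or a timeout expires. The following is a condensed sketch of that loop for kube-system pods, assuming client-go and one of the label selectors seen in the log; it illustrates the pattern rather than reproducing minikube's pod_ready.go.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether a pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Wait up to 4m, checking every 2s, until every kube-proxy pod reports Ready.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for i := range pods.Items {
				if !podIsReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all kube-proxy pods are Ready")
}
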
	
	
	==> CRI-O <==
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.412423915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593759412386223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56c74361-0e92-4953-ab95-5eaa5e54c2a7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.413228181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3d8f70f-2bc0-400d-97f6-f1756241be06 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.413374177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3d8f70f-2bc0-400d-97f6-f1756241be06 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.413852201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3d8f70f-2bc0-400d-97f6-f1756241be06 name=/runtime.v1.RuntimeService/ListContainers
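
(The ListContainers entries above are the kubelet polling CRI-O over its gRPC socket. The same call can be issued by hand; a rough sketch, assuming CRI-O's default socket path and the k8s.io/cri-api Go bindings.)

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// /var/run/crio/crio.sock is CRI-O's default endpoint; adjust if configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter returns the full container list, matching the debug responses above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
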
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.467548023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe22d24c-3eab-4dec-ad18-de0f0e7d6283 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.467624585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe22d24c-3eab-4dec-ad18-de0f0e7d6283 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.469244295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e76f8694-8501-434f-96f2-83372d688fae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.470145300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593759470108320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e76f8694-8501-434f-96f2-83372d688fae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.471497976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=108de88c-0314-4c29-a7d4-41e02348a5fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.471567114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=108de88c-0314-4c29-a7d4-41e02348a5fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.471960398Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=108de88c-0314-4c29-a7d4-41e02348a5fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.517264195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8ab2dc7-c44d-4ae9-93f1-152c39078165 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.517338443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8ab2dc7-c44d-4ae9-93f1-152c39078165 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.518586844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1cc06dee-ae5a-4a28-8aee-ec994cd4e1c1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.519273288Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593759519246704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cc06dee-ae5a-4a28-8aee-ec994cd4e1c1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.519965590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ffb9f83-729a-4c46-b86a-2f884a9a0da6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.520029103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ffb9f83-729a-4c46-b86a-2f884a9a0da6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.520433019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ffb9f83-729a-4c46-b86a-2f884a9a0da6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.573921214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25032201-d9c0-47a6-bc89-e1ec620661a2 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.574000783Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25032201-d9c0-47a6-bc89-e1ec620661a2 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.576276765Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8840d462-626d-4d0f-b096-e6d4ab18aeab name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.577529778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763593759577498787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8840d462-626d-4d0f-b096-e6d4ab18aeab name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.580102721Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6294837a-cd55-4667-aa99-382409f6bc2e name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.580179085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6294837a-cd55-4667-aa99-382409f6bc2e name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:09:19 ha-487903 crio[1051]: time="2025-11-19 23:09:19.580615686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6554703e81880e9f6a21acce786b79533e2a7c53bb8f205f8cf58b3e7f2602bf,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763593636083086322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecabad51ca157f947db36f5eba7ab52cd91201da10aac9ace5e0c4be262b48,PodSandboxId:270bc5025a2089cafec093032b0ebbddd76667a9be6dc17be0d73dcc82585702,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1763593607117114313,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7b57f96db7-vl8nf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 946ad3f6-2e30-4020-9558-891d0523c640,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4db302f8e1d90f14b4a4931bf9263114ec245c8993010f4eed63ba0b2ff9d17,PodSandboxId:bcf53581b6e1f1d56c91f5f3d17b42ac1b61baa226601287d8093a657a13d330,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763593604195293709,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8e8800-093c-4c68-ac9c-300049543509,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95,PodSandboxId:21cea62c9e5ab973727fcb21a109a1a976f2836eed75c3f56119049c06989cb9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,State:CONTAINER_RUNNING,CreatedAt:1763593603580117075,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-p9nqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dd7683b-c7e7-487c-904a-506a24f833d8,},Annotations:map[string]string{io.kubernetes.container.hash: 127fdb84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b,PodSandboxId:2d9689b8c4fc51a9b8407de5c6767ecfaee3ba4e33596524328f2bcf5d4107fc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763593603411005301,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fk7mh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8743ca8a-c5e5-4da6-a983-a6191d2a852a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f,PodSandboxId:02cf6c2f51b7a2e4710eaf470b1a546e8dfca228fd615b28c173739ebc295404,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603688687557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zjxkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa8ef68-57ac-42d5-a83d-8d2cb3f6a6b5,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics
\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b,PodSandboxId:acd42dfb49d39f9eed1c02ddf5511235d87334eb812770e7fb44e4707605eadf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763593603469651002,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5gt2t,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a5bca7b-6369-4f70-b467-829bf0c07711,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc2152
7912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763593602977408281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaa
ff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763593593634321911,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f74b446d5d8cfbd14bd87db207
415a01d3d5d28a2cb58f4b03cd16328535c63,PodSandboxId:aadb913b7f2aa359387ce202375bc4784c495957a61cd14ac26b05dc78dcf173,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce5ff7916975f8df7d464b4e56a967792442b9e79944fd74684cc749b281df38,State:CONTAINER_RUNNING,CreatedAt:1763593573545391057,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2660d05a38d7f409ed63a1278c85d94,},Annotations:map[string]string{io.kubernetes.container.hash: 7c465d42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fead33c061a4deb0b1eb4ee9dd3e9e724
dade2871a97a7aad79bef05acbd4a07,PodSandboxId:83240b63d40d6dadf9466b9a99213868b8f23ea48c80e114d09241c058d55ba1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763593571304452021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03498ab9918acb57128aa1e7f285fe26,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b,PodSandboxId:a4df466e854f6f206058fd4be7b2638dce53bead5212e8dd0e6fd550050ca683,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763593571289637293,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da83ad8c68cff2289fa7c146858b394c,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e,PodSandboxId:2ea97d68a5406f530212285ae86cbed35cbeddb5975f19b02bd01f6965702dba,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763593571267392885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85430ee602aa7edb190bbc4c6f215cf4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"cont
ainerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4,PodSandboxId:6d84027fd8d6f7c339ab4b9418a22d8466c3533a733fe4eaaff2a321dfb6e26b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763593571195978986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-487903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f157a6337bb1e4494d2f66a12bd99f7,},Annotations:map[string]string{io.kubernetes.contain
er.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6294837a-cd55-4667-aa99-382409f6bc2e name=/runtime.v1.RuntimeService/ListContainers
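
The crio debug entries above are the CRI gRPC calls (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo and /runtime.v1.RuntimeService/ListContainers) that the log collector issues against cri-o 1.29.1 while assembling this report; the same container list is what `sudo crictl ps -a` prints inside the node as the container-status table below. The following is a minimal Go sketch (not part of the test harness) of the two runtime queries, assuming the default cri-o socket path inside the minikube VM (/var/lib is not involved; the endpoint /var/run/crio/crio.sock is an assumption) and the k8s.io/cri-api v1 client:

// Sketch only: issue the same CRI calls seen in the crio debug log above --
// Version and an unfiltered ListContainers -- directly against the runtime.
// The socket path is an assumption (cri-o's usual default inside the VM).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial cri-o: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same call as /runtime.v1.RuntimeService/Version in the log above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Same call as the unfiltered ListContainers ("No filters were applied").
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s %-25s attempt=%d state=%s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}
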
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6554703e81880       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       4                   bcf53581b6e1f       storage-provisioner
	08ecabad51ca1       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   2 minutes ago       Running             busybox                   1                   270bc5025a208       busybox-7b57f96db7-vl8nf
	f4db302f8e1d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Exited              storage-provisioner       3                   bcf53581b6e1f       storage-provisioner
	cf3b8bef3853f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   1                   02cf6c2f51b7a       coredns-66bc5c9577-zjxkb
	671e74cfb90ed       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago       Running             kindnet-cni               1                   21cea62c9e5ab       kindnet-p9nqh
	323c3e00977ee       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   1                   acd42dfb49d39       coredns-66bc5c9577-5gt2t
	8e1ce69b078fd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      2 minutes ago       Running             kube-proxy                1                   2d9689b8c4fc5       kube-proxy-fk7mh
	407c1906949db       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      2 minutes ago       Running             kube-controller-manager   2                   a4df466e854f6       kube-controller-manager-ha-487903
	0a3ebfa791420       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      2 minutes ago       Running             kube-apiserver            2                   6d84027fd8d6f       kube-apiserver-ha-487903
	9f74b446d5d8c       ghcr.io/kube-vip/kube-vip@sha256:554d1e07ee24a046bbc7fba67f438c01b480b072c6f0b99215321fc0eb440178     3 minutes ago       Running             kube-vip                  1                   aadb913b7f2aa       kube-vip-ha-487903
	fead33c061a4d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      3 minutes ago       Running             kube-scheduler            1                   83240b63d40d6       kube-scheduler-ha-487903
	b7d9fc5b2567d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      3 minutes ago       Exited              kube-controller-manager   1                   a4df466e854f6       kube-controller-manager-ha-487903
	361486fad16d1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      3 minutes ago       Running             etcd                      1                   2ea97d68a5406       etcd-ha-487903
	37548c727f81a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      3 minutes ago       Exited              kube-apiserver            1                   6d84027fd8d6f       kube-apiserver-ha-487903
	
	
	==> coredns [323c3e00977ee5764d9f023ed9d6a8cff1ae7f4fd8b5be7f36bf0c296650100b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51076 - 6967 "HINFO IN 7389388171048239250.1605567939079731882. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415536075s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [cf3b8bef3853f0cd8da58227fc3508657f73899cd5de505b5b41e40a5be41f1f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44386 - 47339 "HINFO IN 5025386377785033151.6368126768169479003. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.417913634s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
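
Both coredns replicas above fail their initial List calls with "dial tcp 10.96.0.1:443: i/o timeout", i.e. the in-cluster kubernetes Service VIP was unreachable while the ha-487903 control plane was restarting. Below is a minimal, illustrative Go probe of that same endpoint; the address 10.96.0.1:443 is taken from the log (the default kubernetes Service ClusterIP), and the probe would need to run inside the cluster network (a pod or the node's network namespace) to give a meaningful answer:

// Sketch only: a plain TCP connect with a deadline, roughly what the
// client-go reflector attempts before the "i/o timeout" errors above.
// The target address is taken from the coredns log, not hard knowledge
// of this cluster's Service CIDR.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const target = "10.96.0.1:443"

	conn, err := net.DialTimeout("tcp", target, 3*time.Second)
	if err != nil {
		fmt.Printf("API VIP %s unreachable: %v\n", target, err)
		return
	}
	defer conn.Close()
	fmt.Printf("API VIP %s reachable (local %s)\n", target, conn.LocalAddr())
}
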
	
	
	==> describe nodes <==
	Name:               ha-487903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_48_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:12 +0000   Wed, 19 Nov 2025 22:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.15
	  Hostname:    ha-487903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1ad91e99cee4f2a89ceda034e4410c0
	  System UUID:                a1ad91e9-9cee-4f2a-89ce-da034e4410c0
	  Boot ID:                    1b20db97-3ea3-483b-aa28-0753781928f2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vl8nf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-5gt2t             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 coredns-66bc5c9577-zjxkb             100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-ha-487903                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kindnet-p9nqh                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      21m
	  kube-system                 kube-apiserver-ha-487903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-487903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-fk7mh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-487903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-487903                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (9%)  390Mi (13%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 21m                  kube-proxy       
	  Normal   Starting                 2m34s                kube-proxy       
	  Normal   NodeAllocatableEnforced  21m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)    kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)    kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)    kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     21m                  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m                  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           21m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   NodeReady                21m                  kubelet          Node ha-487903 status is now: NodeReady
	  Normal   RegisteredNode           20m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           18m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   NodeHasSufficientPID     3m9s (x7 over 3m9s)  kubelet          Node ha-487903 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m9s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m9s (x8 over 3m9s)  kubelet          Node ha-487903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m9s (x8 over 3m9s)  kubelet          Node ha-487903 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m38s                kubelet          Node ha-487903 has been rebooted, boot id: 1b20db97-3ea3-483b-aa28-0753781928f2
	  Normal   RegisteredNode           2m32s                node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           2m30s                node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           114s                 node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	  Normal   RegisteredNode           24s                  node-controller  Node ha-487903 event: Registered Node ha-487903 in Controller
	
	
	Name:               ha-487903-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_49_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:49:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:06 +0000   Wed, 19 Nov 2025 22:54:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    ha-487903-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 dcc51fc7a2ff40ae988dda36299d6bbc
	  System UUID:                dcc51fc7-a2ff-40ae-988d-da36299d6bbc
	  Boot ID:                    6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-xjvfn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-487903-m02                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         20m
	  kube-system                 kindnet-9zx8x                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      20m
	  kube-system                 kube-apiserver-ha-487903-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-487903-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-77wjf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-487903-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-487903-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   RegisteredNode           20m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           20m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   NodeNotReady             16m                    node-controller  Node ha-487903-m02 status is now: NodeNotReady
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 14m                    kubelet          Node ha-487903-m02 has been rebooted, boot id: e9c055dc-1db9-46bb-aebb-1872d4771aa9
	  Normal   RegisteredNode           14m                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node ha-487903-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x7 over 2m48s)  kubelet          Node ha-487903-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m34s                  kubelet          Node ha-487903-m02 has been rebooted, boot id: 6ad68891-6365-45be-8b40-3a4d3c73c34d
	  Normal   RegisteredNode           2m33s                  node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           115s                   node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-487903-m02 event: Registered Node ha-487903-m02 in Controller
	
	
	Name:               ha-487903-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_50_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:50:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:07:40 +0000   Wed, 19 Nov 2025 23:07:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    ha-487903-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9ddbb3bf8b54cd48c27cb1452f23fd2
	  System UUID:                e9ddbb3b-f8b5-4cd4-8c27-cb1452f23fd2
	  Boot ID:                    ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-6q5gq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-487903-m03                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         18m
	  kube-system                 kindnet-kslhw                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      18m
	  kube-system                 kube-apiserver-ha-487903-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-487903-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-tkx9r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-487903-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-487903-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   Starting                 111s                   kube-proxy       
	  Normal   RegisteredNode           18m                    node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   NodeNotReady             13m                    node-controller  Node ha-487903-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           2m33s                  node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node ha-487903-m03 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m13s (x7 over 2m13s)  kubelet          Node ha-487903-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m13s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m1s                   kubelet          Node ha-487903-m03 has been rebooted, boot id: ebee6c5a-099c-4845-b6bc-e5686cb73f0c
	  Normal   RegisteredNode           115s                   node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-487903-m03 event: Registered Node ha-487903-m03 in Controller
	
	
	Name:               ha-487903-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T22_51_56_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:51:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:08:13 +0000   Wed, 19 Nov 2025 23:07:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-487903-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ce148a1b98246f6ada06a5a5b14ddce
	  System UUID:                2ce148a1-b982-46f6-ada0-6a5a5b14ddce
	  Boot ID:                    7878c528-f6af-4234-946e-b1c55c0ff956
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-s9k2l       100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      17m
	  kube-system                 kube-proxy-zxtk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (1%)  50Mi (1%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 93s                kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     17m (x3 over 17m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x3 over 17m)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m (x3 over 17m)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeReady                17m                kubelet          Node ha-487903-m04 status is now: NodeReady
	  Normal   RegisteredNode           14m                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   NodeNotReady             13m                node-controller  Node ha-487903-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m33s              node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           2m31s              node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   RegisteredNode           115s               node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	  Normal   Starting                 98s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 98s                kubelet          Node ha-487903-m04 has been rebooted, boot id: 7878c528-f6af-4234-946e-b1c55c0ff956
	  Normal   NodeHasSufficientMemory  98s (x4 over 98s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s (x4 over 98s)  kubelet          Node ha-487903-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x4 over 98s)  kubelet          Node ha-487903-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             98s                kubelet          Node ha-487903-m04 status is now: NodeNotReady
	  Normal   NodeReady                98s (x2 over 98s)  kubelet          Node ha-487903-m04 status is now: NodeReady
	  Normal   RegisteredNode           25s                node-controller  Node ha-487903-m04 event: Registered Node ha-487903-m04 in Controller
	
	
	Name:               ha-487903-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-487903-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=ha-487903
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_19T23_08_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 23:08:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-487903-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:09:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:08:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:08:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:08:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:09:14 +0000   Wed, 19 Nov 2025 23:09:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-487903-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 32c4de896a484581875fb6870ed3d42e
	  System UUID:                32c4de89-6a48-4581-875f-b6870ed3d42e
	  Boot ID:                    38df3e3e-0f55-4608-9956-93b000886dcc
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-487903-m05                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         22s
	  kube-system                 kindnet-4pp9j                            100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      23s
	  kube-system                 kube-apiserver-ha-487903-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-ha-487903-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-c5cxl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-ha-487903-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-vip-ha-487903-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (5%)  50Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        20s   kube-proxy       
	  Normal  RegisteredNode  23s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
	  Normal  RegisteredNode  21s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
	  Normal  RegisteredNode  20s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
	  Normal  RegisteredNode  20s   node-controller  Node ha-487903-m05 event: Registered Node ha-487903-m05 in Controller
	
	
	==> dmesg <==
	[Nov19 23:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 23:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000639] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.971469] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.112003] kauditd_printk_skb: 93 callbacks suppressed
	[ +23.563071] kauditd_printk_skb: 193 callbacks suppressed
	[  +9.425091] kauditd_printk_skb: 6 callbacks suppressed
	[  +3.746118] kauditd_printk_skb: 281 callbacks suppressed
	[Nov19 23:07] kauditd_printk_skb: 11 callbacks suppressed
	[Nov19 23:08] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [361486fad16d169560efd4b5ba18c9754dd222d05d290599faad7b9f7ef4862e] <==
	{"level":"error","ts":"2025-11-19T23:08:44.091154Z","caller":"etcdserver/server.go:1585","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1585\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1526\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1498\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1450\ngo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*peerMemberPromoteHandler).ServeHTTP\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/peer.go:140\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/ser
ver.go:2092"}
	{"level":"warn","ts":"2025-11-19T23:08:44.091244Z","caller":"etcdhttp/peer.go:152","msg":"failed to promote a member","member-id":"bff327183f50f5b5","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-11-19T23:08:44.213555Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":3364,"remote-peer-id":"bff327183f50f5b5","bytes":5584801,"size":"5.6 MB"}
	{"level":"warn","ts":"2025-11-19T23:08:44.485302Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:08:44.496447Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T23:08:44.566259Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"bff327183f50f5b5","error":"failed to write bff327183f50f5b5 on stream MsgApp v2 (write tcp 192.168.39.15:2380->192.168.39.250:54272: write: broken pipe)"}
	{"level":"warn","ts":"2025-11-19T23:08:44.567562Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.587171Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aadd773bb1fe5a6f switched to configuration voters=(83930990489806575 5236735666982451297 12312128054573816431 13831441865679893941)"}
	{"level":"info","ts":"2025-11-19T23:08:44.587459Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"546e0a293cd37a14","local-member-id":"aadd773bb1fe5a6f","promoted-member-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.587540Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aadd773bb1fe5a6f","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.698614Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.698690Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"warn","ts":"2025-11-19T23:08:44.734058Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"bff327183f50f5b5","error":"failed to write bff327183f50f5b5 on stream Message (write tcp 192.168.39.15:2380->192.168.39.250:54284: write: broken pipe)"}
	{"level":"warn","ts":"2025-11-19T23:08:44.734978Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.776897Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"bff327183f50f5b5","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-19T23:08:44.776951Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.776969Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.779207Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:44.781055Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aadd773bb1fe5a6f","to":"bff327183f50f5b5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-19T23:08:44.781084Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aadd773bb1fe5a6f","remote-peer-id":"bff327183f50f5b5"}
	{"level":"info","ts":"2025-11-19T23:08:50.523545Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:08:57.744664Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:08:58.669455Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:09:11.890391Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-19T23:09:14.213806Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aadd773bb1fe5a6f","to":"bff327183f50f5b5","bytes":5584801,"size":"5.6 MB","took":"30.187406936s"}
	
	
	==> kernel <==
	 23:09:20 up 3 min,  0 users,  load average: 0.41, 0.28, 0.12
	Linux ha-487903 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kindnet [671e74cfb90edb492f69d2062ab55e74926857b9da6b2bd43dc676ca77a34e95] <==
	I1119 23:08:55.551136       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:08:55.551164       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:08:55.551384       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:08:55.551391       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:09:05.550320       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:09:05.550468       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:09:05.551072       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:09:05.551088       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:09:05.551228       1 main.go:297] Handling node with IPs: map[192.168.39.250:{}]
	I1119 23:09:05.551234       1 main.go:324] Node ha-487903-m05 has CIDR [10.244.4.0/24] 
	I1119 23:09:05.551333       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.168.39.250 Flags: [] Table: 0 Realm: 0} 
	I1119 23:09:05.552492       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:09:05.552507       1 main.go:301] handling current node
	I1119 23:09:05.552583       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:09:05.552588       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:09:15.550480       1 main.go:297] Handling node with IPs: map[192.168.39.191:{}]
	I1119 23:09:15.550540       1 main.go:324] Node ha-487903-m02 has CIDR [10.244.1.0/24] 
	I1119 23:09:15.551055       1 main.go:297] Handling node with IPs: map[192.168.39.160:{}]
	I1119 23:09:15.551067       1 main.go:324] Node ha-487903-m03 has CIDR [10.244.2.0/24] 
	I1119 23:09:15.551996       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1119 23:09:15.552036       1 main.go:324] Node ha-487903-m04 has CIDR [10.244.3.0/24] 
	I1119 23:09:15.552376       1 main.go:297] Handling node with IPs: map[192.168.39.250:{}]
	I1119 23:09:15.552388       1 main.go:324] Node ha-487903-m05 has CIDR [10.244.4.0/24] 
	I1119 23:09:15.552999       1 main.go:297] Handling node with IPs: map[192.168.39.15:{}]
	I1119 23:09:15.553029       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a3ebfa791420fd99d627281216572fc41635c59a2e823681f10e0265a5601af] <==
	I1119 23:06:41.578069       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 23:06:41.578813       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:41.578895       1 policy_source.go:240] refreshing policies
	I1119 23:06:41.608674       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:06:41.652233       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 23:06:41.655394       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 23:06:41.655821       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 23:06:41.655850       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 23:06:41.656332       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 23:06:41.656371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 23:06:41.656393       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 23:06:41.661422       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 23:06:41.661498       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:06:41.661574       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 23:06:41.678861       1 cache.go:39] Caches are synced for autoregister controller
	W1119 23:06:41.766283       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.191]
	I1119 23:06:41.770787       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 23:06:41.846071       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1119 23:06:41.851314       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1119 23:06:42.378977       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:06:42.473024       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1119 23:06:45.193304       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.15 192.168.39.191]
	I1119 23:06:47.599355       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 23:06:47.956548       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 23:06:50.470006       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-apiserver [37548c727f81afd8dd40f5cf29964a61c4e4370cbb603376f96287d018620cb4] <==
	I1119 23:06:11.880236       1 server.go:150] Version: v1.34.1
	I1119 23:06:11.880286       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1119 23:06:12.813039       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1119 23:06:12.813073       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1119 23:06:12.813086       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1119 23:06:12.813090       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1119 23:06:12.813094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1119 23:06:12.813097       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1119 23:06:12.813101       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1119 23:06:12.813104       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1119 23:06:12.813108       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1119 23:06:12.813111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1119 23:06:12.813114       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1119 23:06:12.813118       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1119 23:06:12.905211       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:12.913843       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1119 23:06:12.920093       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1119 23:06:12.966564       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 23:06:12.985714       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1119 23:06:12.985841       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1119 23:06:12.986449       1 instance.go:239] Using reconciler: lease
	W1119 23:06:12.991441       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1119 23:06:32.899983       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1119 23:06:32.912361       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1119 23:06:32.990473       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [407c1906949db307229dfea169cf035af7ee02aa72d4a48566dbe3b8c43189a6] <==
	I1119 23:06:47.649012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 23:06:47.649925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 23:06:47.650061       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 23:06:47.652900       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 23:06:47.653973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 23:06:47.654043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:06:47.654066       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:06:47.655286       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 23:06:47.658251       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 23:06:47.661337       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 23:06:47.661495       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 23:06:47.665057       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 23:06:47.668198       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 23:06:47.718631       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m04"
	I1119 23:06:47.722547       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903"
	I1119 23:06:47.722625       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m02"
	I1119 23:06:47.722698       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m03"
	I1119 23:06:47.725022       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 23:07:42.933678       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	E1119 23:08:56.919856       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-6g7fs failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-6g7fs\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1119 23:08:57.480111       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	I1119 23:08:57.481547       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-487903-m05\" does not exist"
	I1119 23:08:57.506476       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-487903-m05" podCIDRs=["10.244.4.0/24"]
	I1119 23:08:57.771217       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-487903-m05"
	I1119 23:09:14.090514       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-487903-m04"
	
	
	==> kube-controller-manager [b7d9fc5b2567d02f5794d850ec1778030b3378c830c952d472803d0582d5904b] <==
	I1119 23:06:13.347369       1 serving.go:386] Generated self-signed cert in-memory
	I1119 23:06:14.236064       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1119 23:06:14.236118       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:14.241243       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 23:06:14.241453       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1119 23:06:14.242515       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1119 23:06:14.242958       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 23:06:41.727088       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[-]poststarthook/bootstrap-controller failed: reason withheld\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [8e1ce69b078fd9d69a966ab3ae75b0a9a20f1716235f7fd123429ac0fe73bb0b] <==
	I1119 23:06:45.377032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 23:06:45.478419       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 23:06:45.478668       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.15"]
	E1119 23:06:45.478924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:06:45.554663       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1119 23:06:45.554766       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1119 23:06:45.554814       1 server_linux.go:132] "Using iptables Proxier"
	I1119 23:06:45.584249       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:06:45.586108       1 server.go:527] "Version info" version="v1.34.1"
	I1119 23:06:45.586390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:45.595385       1 config.go:200] "Starting service config controller"
	I1119 23:06:45.595503       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 23:06:45.595536       1 config.go:106] "Starting endpoint slice config controller"
	I1119 23:06:45.595628       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 23:06:45.595660       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 23:06:45.595795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 23:06:45.601653       1 config.go:309] "Starting node config controller"
	I1119 23:06:45.601683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 23:06:45.601692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 23:06:45.697008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 23:06:45.701060       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 23:06:45.701074       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fead33c061a4deb0b1eb4ee9dd3e9e724dade2871a97a7aad79bef05acbd4a07] <==
	I1119 23:06:14.220668       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:06:24.867573       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.15:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1119 23:06:24.867603       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:06:24.867609       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:06:41.527454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 23:06:41.527518       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:06:41.550229       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.550314       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:06:41.551802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 23:06:41.551954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:06:41.651239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1119 23:08:57.671401       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xv686\": pod kube-proxy-xv686 is already assigned to node \"ha-487903-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xv686" node="ha-487903-m05"
	E1119 23:08:57.671692       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xv686\": pod kube-proxy-xv686 is already assigned to node \"ha-487903-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-xv686"
	E1119 23:08:57.705927       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nxv8w\": pod kube-proxy-nxv8w is already assigned to node \"ha-487903-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nxv8w" node="ha-487903-m05"
	E1119 23:08:57.706059       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nxv8w\": pod kube-proxy-nxv8w is already assigned to node \"ha-487903-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-nxv8w"
	E1119 23:08:57.705956       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sj6n2\": pod kindnet-sj6n2 is already assigned to node \"ha-487903-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-sj6n2" node="ha-487903-m05"
	E1119 23:08:57.706700       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod df6b69ba-9090-4aed-aae7-68b8b959288d(kube-system/kindnet-sj6n2) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-sj6n2"
	E1119 23:08:57.707817       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sj6n2\": pod kindnet-sj6n2 is already assigned to node \"ha-487903-m05\"" logger="UnhandledError" pod="kube-system/kindnet-sj6n2"
	I1119 23:08:57.707882       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sj6n2" node="ha-487903-m05"
	
	
	==> kubelet <==
	Nov 19 23:07:20 ha-487903 kubelet[1174]: E1119 23:07:20.435143    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593640433962521  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443024    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:30 ha-487903 kubelet[1174]: E1119 23:07:30.443098    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593650441380322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446103    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:40 ha-487903 kubelet[1174]: E1119 23:07:40.446443    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593660445234723  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450547    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:07:50 ha-487903 kubelet[1174]: E1119 23:07:50.450679    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593670449149802  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:00 ha-487903 kubelet[1174]: E1119 23:08:00.452642    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593680452197882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:00 ha-487903 kubelet[1174]: E1119 23:08:00.452706    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593680452197882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:10 ha-487903 kubelet[1174]: E1119 23:08:10.456047    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593690454418478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:10 ha-487903 kubelet[1174]: E1119 23:08:10.456177    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593690454418478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:20 ha-487903 kubelet[1174]: E1119 23:08:20.458413    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593700457879325  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:20 ha-487903 kubelet[1174]: E1119 23:08:20.458447    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593700457879325  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:30 ha-487903 kubelet[1174]: E1119 23:08:30.460710    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593710460384024  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:30 ha-487903 kubelet[1174]: E1119 23:08:30.460817    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593710460384024  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:40 ha-487903 kubelet[1174]: E1119 23:08:40.469974    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593720465271267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:40 ha-487903 kubelet[1174]: E1119 23:08:40.470287    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593720465271267  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:50 ha-487903 kubelet[1174]: E1119 23:08:50.471932    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593730471534019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:08:50 ha-487903 kubelet[1174]: E1119 23:08:50.471982    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593730471534019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:00 ha-487903 kubelet[1174]: E1119 23:09:00.477445    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593740474843600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:00 ha-487903 kubelet[1174]: E1119 23:09:00.478339    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593740474843600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:10 ha-487903 kubelet[1174]: E1119 23:09:10.481303    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593750480821927  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:10 ha-487903 kubelet[1174]: E1119 23:09:10.481376    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593750480821927  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:20 ha-487903 kubelet[1174]: E1119 23:09:20.487687    1174 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763593760487094990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	Nov 19 23:09:20 ha-487903 kubelet[1174]: E1119 23:09:20.487819    1174 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763593760487094990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:156770}  inodes_used:{value:72}}"
	

                                                
                                                
-- /stdout --
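The kubelet log above ends with the same eviction-manager error repeating every ten seconds: the image-filesystem stats returned by the runtime are judged incomplete ("missing image stats"), so eviction synchronization never succeeds. One way to see exactly what the runtime is reporting is to query it directly on the node. The sketch below is only an illustration, not part of the test harness; it assumes crictl is present in the guest (it ships on the minikube ISO) and shells into the ha-487903 node to dump the image filesystem info that the eviction manager is complaining about.

	// inspect_imagefs.go - illustrative sketch (not harness code): ask the
	// ha-487903 node, via `minikube ssh`, what CRI-O reports for its image
	// filesystem. The binary path is relative to the test workspace.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-487903",
			"ssh", "--", "sudo", "crictl", "imagefsinfo").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl imagefsinfo failed: %v\n%s", err, out)
		}
		fmt.Printf("image filesystem info reported by the runtime:\n%s\n", out)
	}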
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-487903 -n ha-487903
helpers_test.go:269: (dbg) Run:  kubectl --context ha-487903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.49s)
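For context, the final post-mortem step above (helpers_test.go:269) asks kubectl for every pod whose phase is not Running. A standalone sketch of that same check is shown below; the helper name is made up for illustration, but the kubectl arguments mirror the command recorded in the log.

	// list_not_running.go - hedged sketch of the post-mortem pod check:
	// equivalent to "kubectl --context ha-487903 get po -A
	// -o=jsonpath={.items[*].metadata.name} --field-selector=status.phase!=Running"
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// notRunningPods shells out to kubectl and returns the names of pods whose
	// phase is anything other than Running, across all namespaces.
	func notRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl",
			"--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := notRunningPods("ha-487903")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("pods not in Running phase: %v\n", pods)
	}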

                                                
                                    
x
+
TestPreload (159.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-529794 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-529794 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m36.838788758s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-529794 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-529794 image pull gcr.io/k8s-minikube/busybox: (2.547745178s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-529794
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-529794: (7.015616364s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-529794 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1119 23:29:25.094584  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-529794 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (49.943008411s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-529794 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/pause:3.1
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
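The assertion that failed here (preload_test.go:75) only requires that "image list" for the restarted profile still mention gcr.io/k8s-minikube/busybox, the image pulled before the stop. A minimal standalone sketch of that check, not the test's actual code, would look like this; binary path, profile, and image name are taken from the log above.

	// preload_image_check.go - illustrative sketch of the failing assertion:
	// after restarting the profile, `minikube image list` should still contain
	// the image pulled before the stop.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func imageListContains(minikubeBin, profile, image string) (bool, error) {
		out, err := exec.Command(minikubeBin, "-p", profile, "image", "list").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("image list failed: %w\n%s", err, out)
		}
		return strings.Contains(string(out), image), nil
	}

	func main() {
		found, err := imageListContains("out/minikube-linux-amd64",
			"test-preload-529794", "gcr.io/k8s-minikube/busybox")
		if err != nil {
			log.Fatal(err)
		}
		if !found {
			// This is the condition the test flagged above.
			log.Fatal("expected gcr.io/k8s-minikube/busybox in image list output")
		}
		fmt.Println("pulled image survived the restart")
	}

In this run the restarted profile's list (shown above) contains only the default Kubernetes images, which is exactly the condition the test flags.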
panic.go:636: *** TestPreload FAILED at 2025-11-19 23:29:39.279698978 +0000 UTC m=+6159.998268865
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-529794 -n test-preload-529794
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-529794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-529794 logs -n 25: (1.123796318s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS                                                                                                                                                       │ PROFILE              │ USER    │ VERSION │ START TIME          │ END TIME            │
	│ ssh     │ multinode-966771 ssh -n multinode-966771-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:15 UTC │ 19 Nov 25 23:15 UTC │
	│ ssh     │ multinode-966771 ssh -n multinode-966771 sudo cat /home/docker/cp-test_multinode-966771-m03_multinode-966771.txt                                          │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:15 UTC │ 19 Nov 25 23:15 UTC │
	│ cp      │ multinode-966771 cp multinode-966771-m03:/home/docker/cp-test.txt multinode-966771-m02:/home/docker/cp-test_multinode-966771-m03_multinode-966771-m02.txt │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:15 UTC │ 19 Nov 25 23:15 UTC │
	│ ssh     │ multinode-966771 ssh -n multinode-966771-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:15 UTC │ 19 Nov 25 23:15 UTC │
	│ ssh     │ multinode-966771 ssh -n multinode-966771-m02 sudo cat /home/docker/cp-test_multinode-966771-m03_multinode-966771-m02.txt                                  │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:15 UTC │ 19 Nov 25 23:15 UTC │
	│ node    │ multinode-966771 node stop m03                                                                                                                            │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:15 UTC │ 19 Nov 25 23:15 UTC │
	│ node    │ multinode-966771 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:15 UTC │ 19 Nov 25 23:16 UTC │
	│ node    │ list -p multinode-966771                                                                                                                                  │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:16 UTC │                     │
	│ stop    │ -p multinode-966771                                                                                                                                       │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:16 UTC │ 19 Nov 25 23:19 UTC │
	│ start   │ -p multinode-966771 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:19 UTC │ 19 Nov 25 23:21 UTC │
	│ node    │ list -p multinode-966771                                                                                                                                  │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:21 UTC │                     │
	│ node    │ multinode-966771 node delete m03                                                                                                                          │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:21 UTC │ 19 Nov 25 23:21 UTC │
	│ stop    │ multinode-966771 stop                                                                                                                                     │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:21 UTC │ 19 Nov 25 23:24 UTC │
	│ start   │ -p multinode-966771 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:24 UTC │ 19 Nov 25 23:26 UTC │
	│ node    │ list -p multinode-966771                                                                                                                                  │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:26 UTC │                     │
	│ start   │ -p multinode-966771-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-966771-m02 │ jenkins │ v1.37.0 │ 19 Nov 25 23:26 UTC │                     │
	│ start   │ -p multinode-966771-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-966771-m03 │ jenkins │ v1.37.0 │ 19 Nov 25 23:26 UTC │ 19 Nov 25 23:27 UTC │
	│ node    │ add -p multinode-966771                                                                                                                                   │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:27 UTC │                     │
	│ delete  │ -p multinode-966771-m03                                                                                                                                   │ multinode-966771-m03 │ jenkins │ v1.37.0 │ 19 Nov 25 23:27 UTC │ 19 Nov 25 23:27 UTC │
	│ delete  │ -p multinode-966771                                                                                                                                       │ multinode-966771     │ jenkins │ v1.37.0 │ 19 Nov 25 23:27 UTC │ 19 Nov 25 23:27 UTC │
	│ start   │ -p test-preload-529794 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-529794  │ jenkins │ v1.37.0 │ 19 Nov 25 23:27 UTC │ 19 Nov 25 23:28 UTC │
	│ image   │ test-preload-529794 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-529794  │ jenkins │ v1.37.0 │ 19 Nov 25 23:28 UTC │ 19 Nov 25 23:28 UTC │
	│ stop    │ -p test-preload-529794                                                                                                                                    │ test-preload-529794  │ jenkins │ v1.37.0 │ 19 Nov 25 23:28 UTC │ 19 Nov 25 23:28 UTC │
	│ start   │ -p test-preload-529794 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-529794  │ jenkins │ v1.37.0 │ 19 Nov 25 23:28 UTC │ 19 Nov 25 23:29 UTC │
	│ image   │ test-preload-529794 image list                                                                                                                            │ test-preload-529794  │ jenkins │ v1.37.0 │ 19 Nov 25 23:29 UTC │ 19 Nov 25 23:29 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 23:28:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 23:28:49.195474  153310 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:28:49.195768  153310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:28:49.195780  153310 out.go:374] Setting ErrFile to fd 2...
	I1119 23:28:49.195783  153310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:28:49.196070  153310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:28:49.196551  153310 out.go:368] Setting JSON to false
	I1119 23:28:49.197448  153310 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":18676,"bootTime":1763576253,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 23:28:49.197530  153310 start.go:143] virtualization: kvm guest
	I1119 23:28:49.199443  153310 out.go:179] * [test-preload-529794] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 23:28:49.200615  153310 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 23:28:49.200615  153310 notify.go:221] Checking for updates...
	I1119 23:28:49.201749  153310 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 23:28:49.202781  153310 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:28:49.203767  153310 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 23:28:49.204715  153310 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 23:28:49.205856  153310 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 23:28:49.207411  153310 config.go:182] Loaded profile config "test-preload-529794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1119 23:28:49.208969  153310 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1119 23:28:49.209968  153310 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 23:28:49.243904  153310 out.go:179] * Using the kvm2 driver based on existing profile
	I1119 23:28:49.244990  153310 start.go:309] selected driver: kvm2
	I1119 23:28:49.245006  153310 start.go:930] validating driver "kvm2" against &{Name:test-preload-529794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-529794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:28:49.245116  153310 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 23:28:49.246088  153310 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:28:49.246121  153310 cni.go:84] Creating CNI manager for ""
	I1119 23:28:49.246174  153310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 23:28:49.246225  153310 start.go:353] cluster config:
	{Name:test-preload-529794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-529794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:28:49.246333  153310 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:28:49.247629  153310 out.go:179] * Starting "test-preload-529794" primary control-plane node in "test-preload-529794" cluster
	I1119 23:28:49.248691  153310 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1119 23:28:49.268088  153310 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1119 23:28:49.268122  153310 cache.go:65] Caching tarball of preloaded images
	I1119 23:28:49.268292  153310 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1119 23:28:49.269744  153310 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1119 23:28:49.270749  153310 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1119 23:28:49.298229  153310 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1119 23:28:49.298276  153310 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1119 23:28:51.728853  153310 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1119 23:28:51.729047  153310 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/config.json ...
	I1119 23:28:51.729309  153310 start.go:360] acquireMachinesLock for test-preload-529794: {Name:mk31ae761a53f3900bcf08a8c89eb1fc69e23fe3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1119 23:28:51.729393  153310 start.go:364] duration metric: took 57.79µs to acquireMachinesLock for "test-preload-529794"
	I1119 23:28:51.729413  153310 start.go:96] Skipping create...Using existing machine configuration
	I1119 23:28:51.729419  153310 fix.go:54] fixHost starting: 
	I1119 23:28:51.731412  153310 fix.go:112] recreateIfNeeded on test-preload-529794: state=Stopped err=<nil>
	W1119 23:28:51.731440  153310 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 23:28:51.733372  153310 out.go:252] * Restarting existing kvm2 VM for "test-preload-529794" ...
	I1119 23:28:51.733417  153310 main.go:143] libmachine: starting domain...
	I1119 23:28:51.733429  153310 main.go:143] libmachine: ensuring networks are active...
	I1119 23:28:51.734571  153310 main.go:143] libmachine: Ensuring network default is active
	I1119 23:28:51.735189  153310 main.go:143] libmachine: Ensuring network mk-test-preload-529794 is active
	I1119 23:28:51.735654  153310 main.go:143] libmachine: getting domain XML...
	I1119 23:28:51.736745  153310 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-529794</name>
	  <uuid>987a228b-72d1-46f9-96be-834acb164560</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/test-preload-529794.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:c8:c3:32'/>
	      <source network='mk-test-preload-529794'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:99:79:e1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1119 23:28:52.988928  153310 main.go:143] libmachine: waiting for domain to start...
	I1119 23:28:52.990646  153310 main.go:143] libmachine: domain is now running
	I1119 23:28:52.990670  153310 main.go:143] libmachine: waiting for IP...
	I1119 23:28:52.991490  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:28:52.992115  153310 main.go:143] libmachine: domain test-preload-529794 has current primary IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:28:52.992129  153310 main.go:143] libmachine: found domain IP: 192.168.39.219
	I1119 23:28:52.992135  153310 main.go:143] libmachine: reserving static IP address...
	I1119 23:28:52.992513  153310 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-529794", mac: "52:54:00:c8:c3:32", ip: "192.168.39.219"} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:27:19 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:28:52.992542  153310 main.go:143] libmachine: skip adding static IP to network mk-test-preload-529794 - found existing host DHCP lease matching {name: "test-preload-529794", mac: "52:54:00:c8:c3:32", ip: "192.168.39.219"}
	I1119 23:28:52.992551  153310 main.go:143] libmachine: reserved static IP address 192.168.39.219 for domain test-preload-529794
	I1119 23:28:52.992585  153310 main.go:143] libmachine: waiting for SSH...
	I1119 23:28:52.992593  153310 main.go:143] libmachine: Getting to WaitForSSH function...
	I1119 23:28:52.995299  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:28:52.995725  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:27:19 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:28:52.995747  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:28:52.995980  153310 main.go:143] libmachine: Using SSH client type: native
	I1119 23:28:52.996231  153310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I1119 23:28:52.996244  153310 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1119 23:28:56.108212  153310 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.219:22: connect: no route to host
	I1119 23:29:02.188272  153310 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.219:22: connect: no route to host
	I1119 23:29:05.189699  153310 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.219:22: connect: connection refused
	I1119 23:29:08.301720  153310 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:29:08.305253  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.305606  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.305635  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.305819  153310 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/config.json ...
	I1119 23:29:08.306037  153310 machine.go:94] provisionDockerMachine start ...
	I1119 23:29:08.308297  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.308751  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.308779  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.308973  153310 main.go:143] libmachine: Using SSH client type: native
	I1119 23:29:08.309167  153310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I1119 23:29:08.309177  153310 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 23:29:08.418931  153310 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1119 23:29:08.418970  153310 buildroot.go:166] provisioning hostname "test-preload-529794"
	I1119 23:29:08.422250  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.422724  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.422759  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.422964  153310 main.go:143] libmachine: Using SSH client type: native
	I1119 23:29:08.423256  153310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I1119 23:29:08.423269  153310 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-529794 && echo "test-preload-529794" | sudo tee /etc/hostname
	I1119 23:29:08.550921  153310 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-529794
	
	I1119 23:29:08.553837  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.554214  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.554269  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.554461  153310 main.go:143] libmachine: Using SSH client type: native
	I1119 23:29:08.554642  153310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I1119 23:29:08.554657  153310 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-529794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-529794/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-529794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 23:29:08.676257  153310 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 23:29:08.676292  153310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21918-117497/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-117497/.minikube}
	I1119 23:29:08.676331  153310 buildroot.go:174] setting up certificates
	I1119 23:29:08.676341  153310 provision.go:84] configureAuth start
	I1119 23:29:08.679029  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.679387  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.679409  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.681504  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.681783  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.681800  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.681913  153310 provision.go:143] copyHostCerts
	I1119 23:29:08.681976  153310 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem, removing ...
	I1119 23:29:08.681996  153310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem
	I1119 23:29:08.682063  153310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/ca.pem (1078 bytes)
	I1119 23:29:08.682155  153310 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem, removing ...
	I1119 23:29:08.682163  153310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem
	I1119 23:29:08.682190  153310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/cert.pem (1123 bytes)
	I1119 23:29:08.682239  153310 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem, removing ...
	I1119 23:29:08.682249  153310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem
	I1119 23:29:08.682274  153310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-117497/.minikube/key.pem (1675 bytes)
	I1119 23:29:08.682320  153310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem org=jenkins.test-preload-529794 san=[127.0.0.1 192.168.39.219 localhost minikube test-preload-529794]
	I1119 23:29:08.734684  153310 provision.go:177] copyRemoteCerts
	I1119 23:29:08.734759  153310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 23:29:08.737108  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.737439  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.737459  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.737591  153310 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/id_rsa Username:docker}
	I1119 23:29:08.826836  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1119 23:29:08.860443  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 23:29:08.892112  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 23:29:08.922915  153310 provision.go:87] duration metric: took 246.558344ms to configureAuth
	I1119 23:29:08.922943  153310 buildroot.go:189] setting minikube options for container-runtime
	I1119 23:29:08.923117  153310 config.go:182] Loaded profile config "test-preload-529794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1119 23:29:08.925748  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.926117  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:08.926141  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:08.926286  153310 main.go:143] libmachine: Using SSH client type: native
	I1119 23:29:08.926467  153310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I1119 23:29:08.926482  153310 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 23:29:09.177139  153310 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 23:29:09.177172  153310 machine.go:97] duration metric: took 871.118765ms to provisionDockerMachine
	I1119 23:29:09.177186  153310 start.go:293] postStartSetup for "test-preload-529794" (driver="kvm2")
	I1119 23:29:09.177197  153310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 23:29:09.177265  153310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 23:29:09.180354  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.180701  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:09.180726  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.180860  153310 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/id_rsa Username:docker}
	I1119 23:29:09.267190  153310 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 23:29:09.272446  153310 info.go:137] Remote host: Buildroot 2025.02
	I1119 23:29:09.272476  153310 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/addons for local assets ...
	I1119 23:29:09.272557  153310 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-117497/.minikube/files for local assets ...
	I1119 23:29:09.272674  153310 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem -> 1213692.pem in /etc/ssl/certs
	I1119 23:29:09.272809  153310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 23:29:09.284733  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:29:09.315343  153310 start.go:296] duration metric: took 138.138628ms for postStartSetup
	I1119 23:29:09.315385  153310 fix.go:56] duration metric: took 17.58596586s for fixHost
	I1119 23:29:09.318055  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.318435  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:09.318464  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.318629  153310 main.go:143] libmachine: Using SSH client type: native
	I1119 23:29:09.318821  153310 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I1119 23:29:09.318831  153310 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1119 23:29:09.427756  153310 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763594949.391651514
	
	I1119 23:29:09.427795  153310 fix.go:216] guest clock: 1763594949.391651514
	I1119 23:29:09.427808  153310 fix.go:229] Guest: 2025-11-19 23:29:09.391651514 +0000 UTC Remote: 2025-11-19 23:29:09.31538866 +0000 UTC m=+20.168706753 (delta=76.262854ms)
	I1119 23:29:09.427833  153310 fix.go:200] guest clock delta is within tolerance: 76.262854ms
	I1119 23:29:09.427841  153310 start.go:83] releasing machines lock for "test-preload-529794", held for 17.698434398s
	I1119 23:29:09.430891  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.431317  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:09.431343  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.431840  153310 ssh_runner.go:195] Run: cat /version.json
	I1119 23:29:09.431902  153310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 23:29:09.434737  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.435117  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.435151  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:09.435176  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.435335  153310 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/id_rsa Username:docker}
	I1119 23:29:09.435584  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:09.435617  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:09.435770  153310 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/id_rsa Username:docker}
	I1119 23:29:09.513597  153310 ssh_runner.go:195] Run: systemctl --version
	I1119 23:29:09.539910  153310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 23:29:09.686462  153310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 23:29:09.693897  153310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 23:29:09.693974  153310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 23:29:09.715199  153310 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 23:29:09.715230  153310 start.go:496] detecting cgroup driver to use...
	I1119 23:29:09.715313  153310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 23:29:09.736552  153310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 23:29:09.755086  153310 docker.go:218] disabling cri-docker service (if available) ...
	I1119 23:29:09.755167  153310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 23:29:09.774018  153310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 23:29:09.792023  153310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 23:29:09.947859  153310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 23:29:10.165995  153310 docker.go:234] disabling docker service ...
	I1119 23:29:10.166071  153310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 23:29:10.183126  153310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 23:29:10.198716  153310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 23:29:10.357138  153310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 23:29:10.506949  153310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 23:29:10.525010  153310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 23:29:10.549468  153310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1119 23:29:10.549534  153310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:29:10.563313  153310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1119 23:29:10.563380  153310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:29:10.577741  153310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:29:10.590736  153310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:29:10.604504  153310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 23:29:10.617892  153310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:29:10.630916  153310 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 23:29:10.652677  153310 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
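The sed/grep invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place rather than writing a fresh template. Reconstructed from those commands (not read back from the VM), the relevant part of the drop-in should end up looking roughly like:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The last two edits are guarded: the default_sysctls block is only created if it does not already exist, and the unprivileged-port entry is inserted right after the opening bracket, so re-running the sequence stays idempotent.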
	I1119 23:29:10.666404  153310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 23:29:10.676971  153310 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 23:29:10.677045  153310 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 23:29:10.697810  153310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
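The netfilter handling above is a check-then-fix: the sysctl probe fails because the br_netfilter module is not loaded yet, so the tooling falls back to modprobe and then enables IPv4 forwarding through /proc directly. A hedged sketch of the same fallback, shelling out the way the log does (error handling simplified; needs root to actually take effect):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// If the bridge netfilter sysctl is not visible, the module is probably
    	// not loaded; load it instead of treating the probe failure as fatal.
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
    		}
    	}
    	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
    	}
    }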
	I1119 23:29:10.710845  153310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:29:10.854638  153310 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 23:29:10.966896  153310 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 23:29:10.966988  153310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 23:29:10.973084  153310 start.go:564] Will wait 60s for crictl version
	I1119 23:29:10.973164  153310 ssh_runner.go:195] Run: which crictl
	I1119 23:29:10.977613  153310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1119 23:29:11.021119  153310 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1119 23:29:11.021195  153310 ssh_runner.go:195] Run: crio --version
	I1119 23:29:11.051749  153310 ssh_runner.go:195] Run: crio --version
	I1119 23:29:11.084469  153310 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1119 23:29:11.088429  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:11.088900  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:11.088935  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:11.089163  153310 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1119 23:29:11.094048  153310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:29:11.109811  153310 kubeadm.go:884] updating cluster {Name:test-preload-529794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-529794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 23:29:11.110004  153310 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1119 23:29:11.110069  153310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:29:11.152562  153310 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1119 23:29:11.152633  153310 ssh_runner.go:195] Run: which lz4
	I1119 23:29:11.157260  153310 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1119 23:29:11.162411  153310 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1119 23:29:11.162455  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1119 23:29:12.735779  153310 crio.go:462] duration metric: took 1.578572178s to copy over tarball
	I1119 23:29:12.735851  153310 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1119 23:29:14.429716  153310 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.693833113s)
	I1119 23:29:14.429748  153310 crio.go:469] duration metric: took 1.693938486s to extract the tarball
	I1119 23:29:14.429756  153310 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1119 23:29:14.470841  153310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:29:14.514899  153310 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 23:29:14.514930  153310 cache_images.go:86] Images are preloaded, skipping loading
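The preload path above boils down to three remote steps: list images with crictl, and if the expected kube images are missing, copy the cached tarball into the guest and unpack it into /var with lz4, then confirm with a second crictl pass. A condensed sketch of that decision, where runRemote and copyToGuest are hypothetical stand-ins for ssh_runner.Run and the scp call:

    package preload

    import "strings"

    // EnsurePreloadedImages mirrors the check/copy/extract sequence in the log.
    func EnsurePreloadedImages(runRemote func(cmd string) (string, error),
    	copyToGuest func(src, dst string) error, tarball string) error {
    	out, err := runRemote("sudo crictl images --output json")
    	if err == nil && strings.Contains(out, "registry.k8s.io/kube-apiserver") {
    		return nil // expected images already present, nothing to copy
    	}
    	if err := copyToGuest(tarball, "/preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	if _, err := runRemote("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	_, err = runRemote("rm -f /preloaded.tar.lz4") // free the ~400MB once extracted
    	return err
    }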
	I1119 23:29:14.514941  153310 kubeadm.go:935] updating node { 192.168.39.219 8443 v1.32.0 crio true true} ...
	I1119 23:29:14.515069  153310 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-529794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-529794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 23:29:14.515161  153310 ssh_runner.go:195] Run: crio config
	I1119 23:29:14.562347  153310 cni.go:84] Creating CNI manager for ""
	I1119 23:29:14.562381  153310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 23:29:14.562405  153310 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 23:29:14.562438  153310 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.219 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-529794 NodeName:test-preload-529794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 23:29:14.562635  153310 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-529794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.219"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 23:29:14.562697  153310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1119 23:29:14.575682  153310 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 23:29:14.575772  153310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 23:29:14.587754  153310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1119 23:29:14.608766  153310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 23:29:14.629664  153310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1119 23:29:14.653384  153310 ssh_runner.go:195] Run: grep 192.168.39.219	control-plane.minikube.internal$ /etc/hosts
	I1119 23:29:14.657758  153310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.219	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 23:29:14.672714  153310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:29:14.822512  153310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:29:14.854657  153310 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794 for IP: 192.168.39.219
	I1119 23:29:14.854678  153310 certs.go:195] generating shared ca certs ...
	I1119 23:29:14.854701  153310 certs.go:227] acquiring lock for ca certs: {Name:mk7fd1f1adfef6333505c39c1a982465562c820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:29:14.854870  153310 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key
	I1119 23:29:14.854945  153310 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key
	I1119 23:29:14.854960  153310 certs.go:257] generating profile certs ...
	I1119 23:29:14.855057  153310 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/client.key
	I1119 23:29:14.855154  153310 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/apiserver.key.74bf214b
	I1119 23:29:14.855215  153310 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/proxy-client.key
	I1119 23:29:14.855362  153310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem (1338 bytes)
	W1119 23:29:14.855403  153310 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369_empty.pem, impossibly tiny 0 bytes
	I1119 23:29:14.855430  153310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca-key.pem (1675 bytes)
	I1119 23:29:14.855472  153310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/ca.pem (1078 bytes)
	I1119 23:29:14.855506  153310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/cert.pem (1123 bytes)
	I1119 23:29:14.855538  153310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/certs/key.pem (1675 bytes)
	I1119 23:29:14.855595  153310 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem (1708 bytes)
	I1119 23:29:14.856195  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 23:29:14.904908  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1119 23:29:14.942663  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 23:29:14.973645  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 23:29:15.007461  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 23:29:15.039572  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 23:29:15.072764  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 23:29:15.104331  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 23:29:15.136681  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/ssl/certs/1213692.pem --> /usr/share/ca-certificates/1213692.pem (1708 bytes)
	I1119 23:29:15.167846  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 23:29:15.198655  153310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-117497/.minikube/certs/121369.pem --> /usr/share/ca-certificates/121369.pem (1338 bytes)
	I1119 23:29:15.229904  153310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 23:29:15.251918  153310 ssh_runner.go:195] Run: openssl version
	I1119 23:29:15.258999  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121369.pem && ln -fs /usr/share/ca-certificates/121369.pem /etc/ssl/certs/121369.pem"
	I1119 23:29:15.273226  153310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121369.pem
	I1119 23:29:15.279077  153310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:54 /usr/share/ca-certificates/121369.pem
	I1119 23:29:15.279155  153310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121369.pem
	I1119 23:29:15.286836  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/121369.pem /etc/ssl/certs/51391683.0"
	I1119 23:29:15.300957  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1213692.pem && ln -fs /usr/share/ca-certificates/1213692.pem /etc/ssl/certs/1213692.pem"
	I1119 23:29:15.314784  153310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1213692.pem
	I1119 23:29:15.320485  153310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:54 /usr/share/ca-certificates/1213692.pem
	I1119 23:29:15.320557  153310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1213692.pem
	I1119 23:29:15.328250  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1213692.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 23:29:15.342912  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 23:29:15.357598  153310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:29:15.363922  153310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:29:15.364001  153310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 23:29:15.371967  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 23:29:15.387564  153310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 23:29:15.393550  153310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 23:29:15.402079  153310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 23:29:15.410104  153310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 23:29:15.418470  153310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 23:29:15.426573  153310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 23:29:15.434751  153310 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
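Each `openssl x509 -noout -in <cert> -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which would trigger regeneration. An equivalent check in Go with crypto/x509; the path is one of the certs from the log and the 24h window matches -checkend 86400:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon) // mirrors `openssl x509 -checkend 86400`
    }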
	I1119 23:29:15.442831  153310 kubeadm.go:401] StartCluster: {Name:test-preload-529794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-529794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 23:29:15.442933  153310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 23:29:15.442985  153310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:29:15.486790  153310 cri.go:89] found id: ""
	I1119 23:29:15.486909  153310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 23:29:15.500176  153310 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 23:29:15.500207  153310 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 23:29:15.500259  153310 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 23:29:15.512717  153310 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:29:15.513239  153310 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-529794" does not appear in /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:29:15.513351  153310 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-117497/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-529794" cluster setting kubeconfig missing "test-preload-529794" context setting]
	I1119 23:29:15.513587  153310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:29:15.514164  153310 kapi.go:59] client config for test-preload-529794: &rest.Config{Host:"https://192.168.39.219:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:29:15.514544  153310 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1119 23:29:15.514556  153310 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1119 23:29:15.514560  153310 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1119 23:29:15.514564  153310 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1119 23:29:15.514568  153310 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1119 23:29:15.514943  153310 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 23:29:15.527123  153310 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.219
	I1119 23:29:15.527156  153310 kubeadm.go:1161] stopping kube-system containers ...
	I1119 23:29:15.527170  153310 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1119 23:29:15.527224  153310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 23:29:15.576125  153310 cri.go:89] found id: ""
	I1119 23:29:15.576220  153310 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1119 23:29:15.604986  153310 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 23:29:15.618312  153310 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 23:29:15.618332  153310 kubeadm.go:158] found existing configuration files:
	
	I1119 23:29:15.618378  153310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 23:29:15.630230  153310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 23:29:15.630303  153310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 23:29:15.642623  153310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 23:29:15.654013  153310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 23:29:15.654092  153310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 23:29:15.666636  153310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 23:29:15.678454  153310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 23:29:15.678536  153310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 23:29:15.691143  153310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 23:29:15.702677  153310 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 23:29:15.702757  153310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
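The cleanup above applies one rule to each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf: keep the file only if it already references https://control-plane.minikube.internal:8443, otherwise remove it so the kubeadm init phases below regenerate it (here all four are simply absent, so grep exits with status 2 and the rm is a no-op). A compact sketch of that loop, with runRemote again standing in hypothetically for ssh_runner.Run:

    package kubeconfigs

    import "fmt"

    // CleanStaleKubeconfigs removes any kubeconfig that does not point at the
    // expected control-plane endpoint, mirroring the grep/rm pairs in the log.
    func CleanStaleKubeconfigs(runRemote func(cmd string) error) {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, conf := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + conf
    		if err := runRemote(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
    			_ = runRemote("sudo rm -f " + path) // stale or missing: let kubeadm rewrite it
    		}
    	}
    }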
	I1119 23:29:15.715682  153310 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 23:29:15.729417  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 23:29:15.790024  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 23:29:16.981799  153310 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.191733132s)
	I1119 23:29:16.981891  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1119 23:29:17.240318  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 23:29:17.314841  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1119 23:29:17.400743  153310 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:29:17.400830  153310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:29:17.901005  153310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:29:18.401067  153310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:29:18.901451  153310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:29:19.401496  153310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:29:19.901390  153310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:29:19.934594  153310 api_server.go:72] duration metric: took 2.533865266s to wait for apiserver process to appear ...
	I1119 23:29:19.934631  153310 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:29:19.934659  153310 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I1119 23:29:22.422631  153310 api_server.go:279] https://192.168.39.219:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 23:29:22.422665  153310 api_server.go:103] status: https://192.168.39.219:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 23:29:22.422683  153310 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I1119 23:29:22.446994  153310 api_server.go:279] https://192.168.39.219:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 23:29:22.447026  153310 api_server.go:103] status: https://192.168.39.219:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 23:29:22.447040  153310 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I1119 23:29:22.537310  153310 api_server.go:279] https://192.168.39.219:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:29:22.537348  153310 api_server.go:103] status: https://192.168.39.219:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:29:22.934925  153310 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I1119 23:29:22.941114  153310 api_server.go:279] https://192.168.39.219:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:29:22.941139  153310 api_server.go:103] status: https://192.168.39.219:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:29:23.434828  153310 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I1119 23:29:23.441929  153310 api_server.go:279] https://192.168.39.219:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 23:29:23.441956  153310 api_server.go:103] status: https://192.168.39.219:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 23:29:23.935719  153310 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I1119 23:29:23.941334  153310 api_server.go:279] https://192.168.39.219:8443/healthz returned 200:
	ok
	I1119 23:29:23.947827  153310 api_server.go:141] control plane version: v1.32.0
	I1119 23:29:23.947859  153310 api_server.go:131] duration metric: took 4.013218965s to wait for apiserver health ...
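The /healthz transcript above shows the normal progression on a restart: 403 first (the anonymous probe is rejected before the RBAC bootstrap roles exist), then 500 while individual poststarthook checks are still failing, and finally 200, which is the only response that ends the wait. A minimal polling sketch against the same endpoint; InsecureSkipVerify is used only to keep the example short, whereas the real client authenticates with the cluster CA and client certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.219:8443/healthz")
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("not ready, /healthz returned", code) // 403/500 while bootstrapping
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
    	}
    	fmt.Println("timed out waiting for apiserver /healthz")
    }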
	I1119 23:29:23.947871  153310 cni.go:84] Creating CNI manager for ""
	I1119 23:29:23.947888  153310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 23:29:23.949594  153310 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1119 23:29:23.950841  153310 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1119 23:29:23.964718  153310 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
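The bridge CNI step writes a 496-byte /etc/cni/net.d/1-k8s.conflist; the log does not print the file itself, but for the 10.244.0.0/16 pod CIDR chosen earlier a typical bridge-plus-portmap conflist looks roughly like the following (illustrative only, not necessarily minikube's literal template):

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }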
	I1119 23:29:23.990302  153310 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:29:23.999534  153310 system_pods.go:59] 7 kube-system pods found
	I1119 23:29:23.999584  153310 system_pods.go:61] "coredns-668d6bf9bc-z5jsw" [11f6de19-d210-4fd2-8c15-f89ec737004e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 23:29:23.999605  153310 system_pods.go:61] "etcd-test-preload-529794" [2b0bad64-ebc5-4e77-a7ac-d6ca24613feb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:29:23.999620  153310 system_pods.go:61] "kube-apiserver-test-preload-529794" [9ef1b0ae-9f15-45c8-b381-eb39bcbecb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:29:23.999636  153310 system_pods.go:61] "kube-controller-manager-test-preload-529794" [dca7b636-b988-48f4-a22a-02fc03f0fac9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 23:29:23.999646  153310 system_pods.go:61] "kube-proxy-spxmx" [c48b124c-feac-4978-9753-db4dea33dc6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 23:29:23.999660  153310 system_pods.go:61] "kube-scheduler-test-preload-529794" [909c0ea6-b546-4b96-9a50-6c647726b439] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:29:23.999674  153310 system_pods.go:61] "storage-provisioner" [dc8c4178-c36c-4791-ac83-01eccaf76b04] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 23:29:23.999687  153310 system_pods.go:74] duration metric: took 9.355921ms to wait for pod list to return data ...
	I1119 23:29:23.999702  153310 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:29:24.005867  153310 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:29:24.005903  153310 node_conditions.go:123] node cpu capacity is 2
	I1119 23:29:24.005914  153310 node_conditions.go:105] duration metric: took 6.207308ms to run NodePressure ...
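The system_pods wait lists everything in kube-system through the freshly repaired kubeconfig and then checks node capacity and pressure; all seven pods still report ContainersNotReady immediately after the restart, which is expected at this stage. A minimal client-go sketch of the same listing, using the kubeconfig path from the log (the rest is generic client-go, not minikube's internal kapi helper):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21918-117497/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// List kube-system pods the same way the log's system_pods wait does.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    	}
    }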
	I1119 23:29:24.006001  153310 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1119 23:29:24.308301  153310 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1119 23:29:24.313185  153310 kubeadm.go:744] kubelet initialised
	I1119 23:29:24.313212  153310 kubeadm.go:745] duration metric: took 4.88485ms waiting for restarted kubelet to initialise ...
	I1119 23:29:24.313232  153310 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 23:29:24.333227  153310 ops.go:34] apiserver oom_adj: -16
	I1119 23:29:24.333254  153310 kubeadm.go:602] duration metric: took 8.833039229s to restartPrimaryControlPlane
	I1119 23:29:24.333268  153310 kubeadm.go:403] duration metric: took 8.890447234s to StartCluster
	I1119 23:29:24.333302  153310 settings.go:142] acquiring lock: {Name:mk7bf46f049c1d627501587bc2954f8687f12cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:29:24.333401  153310 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 23:29:24.333951  153310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-117497/kubeconfig: {Name:mk079b5c589536bd895a38d6eaf0adbccf891fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 23:29:24.334207  153310 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 23:29:24.334289  153310 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 23:29:24.334390  153310 addons.go:70] Setting storage-provisioner=true in profile "test-preload-529794"
	I1119 23:29:24.334412  153310 addons.go:239] Setting addon storage-provisioner=true in "test-preload-529794"
	W1119 23:29:24.334422  153310 addons.go:248] addon storage-provisioner should already be in state true
	I1119 23:29:24.334422  153310 addons.go:70] Setting default-storageclass=true in profile "test-preload-529794"
	I1119 23:29:24.334448  153310 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-529794"
	I1119 23:29:24.334454  153310 host.go:66] Checking if "test-preload-529794" exists ...
	I1119 23:29:24.334453  153310 config.go:182] Loaded profile config "test-preload-529794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1119 23:29:24.334559  153310 cache.go:107] acquiring lock: {Name:mk7073b8eca670d2c11dece9275947688dc7c859 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 23:29:24.334691  153310 cache.go:115] /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1119 23:29:24.334707  153310 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 190.972µs
	I1119 23:29:24.334721  153310 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1119 23:29:24.334734  153310 cache.go:87] Successfully saved all images to host disk.
	I1119 23:29:24.334854  153310 config.go:182] Loaded profile config "test-preload-529794": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1119 23:29:24.335699  153310 out.go:179] * Verifying Kubernetes components...
	I1119 23:29:24.337061  153310 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 23:29:24.337065  153310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 23:29:24.337397  153310 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 23:29:24.337524  153310 kapi.go:59] client config for test-preload-529794: &rest.Config{Host:"https://192.168.39.219:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:29:24.337964  153310 addons.go:239] Setting addon default-storageclass=true in "test-preload-529794"
	W1119 23:29:24.337989  153310 addons.go:248] addon default-storageclass should already be in state true
	I1119 23:29:24.338015  153310 host.go:66] Checking if "test-preload-529794" exists ...
	I1119 23:29:24.338225  153310 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:29:24.338242  153310 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 23:29:24.340355  153310 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 23:29:24.340375  153310 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 23:29:24.340426  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:24.341058  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:24.341091  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:24.341299  153310 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/id_rsa Username:docker}
	I1119 23:29:24.341807  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:24.342349  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:24.342405  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:24.342571  153310 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/id_rsa Username:docker}
	I1119 23:29:24.343283  153310 main.go:143] libmachine: domain test-preload-529794 has defined MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:24.343673  153310 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c8:c3:32", ip: ""} in network mk-test-preload-529794: {Iface:virbr1 ExpiryTime:2025-11-20 00:29:04 +0000 UTC Type:0 Mac:52:54:00:c8:c3:32 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:test-preload-529794 Clientid:01:52:54:00:c8:c3:32}
	I1119 23:29:24.343699  153310 main.go:143] libmachine: domain test-preload-529794 has defined IP address 192.168.39.219 and MAC address 52:54:00:c8:c3:32 in network mk-test-preload-529794
	I1119 23:29:24.343823  153310 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/test-preload-529794/id_rsa Username:docker}
	I1119 23:29:24.598859  153310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 23:29:24.637665  153310 node_ready.go:35] waiting up to 6m0s for node "test-preload-529794" to be "Ready" ...
	I1119 23:29:24.726893  153310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 23:29:24.731514  153310 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 23:29:24.731958  153310 crio.go:510] couldn't find preloaded image for "registry.k8s.io/pause:3.1". assuming images are not preloaded.
	I1119 23:29:24.731985  153310 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/pause:3.1]
	I1119 23:29:24.732043  153310 image.go:138] retrieving image: registry.k8s.io/pause:3.1
	I1119 23:29:24.733651  153310 image.go:181] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1119 23:29:24.867896  153310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1119 23:29:25.515402  153310 cache_images.go:118] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1119 23:29:25.515445  153310 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1119 23:29:25.515499  153310 ssh_runner.go:195] Run: which crictl
	I1119 23:29:25.523968  153310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 23:29:25.561337  153310 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 23:29:25.562485  153310 addons.go:515] duration metric: took 1.228196961s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 23:29:25.594395  153310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 23:29:25.653923  153310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1119 23:29:25.713130  153310 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1119 23:29:25.713241  153310 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1119 23:29:25.719212  153310 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1119 23:29:25.719234  153310 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.1
	I1119 23:29:25.719282  153310 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1119 23:29:25.974939  153310 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-117497/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1119 23:29:25.975032  153310 cache_images.go:125] Successfully loaded all cached images
	I1119 23:29:25.975046  153310 cache_images.go:94] duration metric: took 1.24304491s to LoadCachedImages
	I1119 23:29:25.975062  153310 cache_images.go:264] succeeded pushing to: test-preload-529794
	W1119 23:29:26.641525  153310 node_ready.go:57] node "test-preload-529794" has "Ready":"False" status (will retry)
	W1119 23:29:28.642457  153310 node_ready.go:57] node "test-preload-529794" has "Ready":"False" status (will retry)
	W1119 23:29:31.141129  153310 node_ready.go:57] node "test-preload-529794" has "Ready":"False" status (will retry)
	I1119 23:29:33.141136  153310 node_ready.go:49] node "test-preload-529794" is "Ready"
	I1119 23:29:33.141180  153310 node_ready.go:38] duration metric: took 8.503449133s for node "test-preload-529794" to be "Ready" ...
	I1119 23:29:33.141203  153310 api_server.go:52] waiting for apiserver process to appear ...
	I1119 23:29:33.141278  153310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:29:33.162754  153310 api_server.go:72] duration metric: took 8.828511972s to wait for apiserver process to appear ...
	I1119 23:29:33.162795  153310 api_server.go:88] waiting for apiserver healthz status ...
	I1119 23:29:33.162829  153310 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I1119 23:29:33.169777  153310 api_server.go:279] https://192.168.39.219:8443/healthz returned 200:
	ok
	I1119 23:29:33.170823  153310 api_server.go:141] control plane version: v1.32.0
	I1119 23:29:33.170851  153310 api_server.go:131] duration metric: took 8.047963ms to wait for apiserver health ...
	I1119 23:29:33.170865  153310 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 23:29:33.174953  153310 system_pods.go:59] 7 kube-system pods found
	I1119 23:29:33.174973  153310 system_pods.go:61] "coredns-668d6bf9bc-z5jsw" [11f6de19-d210-4fd2-8c15-f89ec737004e] Running
	I1119 23:29:33.174981  153310 system_pods.go:61] "etcd-test-preload-529794" [2b0bad64-ebc5-4e77-a7ac-d6ca24613feb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:29:33.174987  153310 system_pods.go:61] "kube-apiserver-test-preload-529794" [9ef1b0ae-9f15-45c8-b381-eb39bcbecb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:29:33.174999  153310 system_pods.go:61] "kube-controller-manager-test-preload-529794" [dca7b636-b988-48f4-a22a-02fc03f0fac9] Running
	I1119 23:29:33.175003  153310 system_pods.go:61] "kube-proxy-spxmx" [c48b124c-feac-4978-9753-db4dea33dc6f] Running
	I1119 23:29:33.175007  153310 system_pods.go:61] "kube-scheduler-test-preload-529794" [909c0ea6-b546-4b96-9a50-6c647726b439] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:29:33.175011  153310 system_pods.go:61] "storage-provisioner" [dc8c4178-c36c-4791-ac83-01eccaf76b04] Running
	I1119 23:29:33.175016  153310 system_pods.go:74] duration metric: took 4.145499ms to wait for pod list to return data ...
	I1119 23:29:33.175024  153310 default_sa.go:34] waiting for default service account to be created ...
	I1119 23:29:33.177523  153310 default_sa.go:45] found service account: "default"
	I1119 23:29:33.177541  153310 default_sa.go:55] duration metric: took 2.51229ms for default service account to be created ...
	I1119 23:29:33.177556  153310 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 23:29:33.181107  153310 system_pods.go:86] 7 kube-system pods found
	I1119 23:29:33.181127  153310 system_pods.go:89] "coredns-668d6bf9bc-z5jsw" [11f6de19-d210-4fd2-8c15-f89ec737004e] Running
	I1119 23:29:33.181135  153310 system_pods.go:89] "etcd-test-preload-529794" [2b0bad64-ebc5-4e77-a7ac-d6ca24613feb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 23:29:33.181142  153310 system_pods.go:89] "kube-apiserver-test-preload-529794" [9ef1b0ae-9f15-45c8-b381-eb39bcbecb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 23:29:33.181148  153310 system_pods.go:89] "kube-controller-manager-test-preload-529794" [dca7b636-b988-48f4-a22a-02fc03f0fac9] Running
	I1119 23:29:33.181153  153310 system_pods.go:89] "kube-proxy-spxmx" [c48b124c-feac-4978-9753-db4dea33dc6f] Running
	I1119 23:29:33.181157  153310 system_pods.go:89] "kube-scheduler-test-preload-529794" [909c0ea6-b546-4b96-9a50-6c647726b439] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 23:29:33.181163  153310 system_pods.go:89] "storage-provisioner" [dc8c4178-c36c-4791-ac83-01eccaf76b04] Running
	I1119 23:29:33.181170  153310 system_pods.go:126] duration metric: took 3.609463ms to wait for k8s-apps to be running ...
	I1119 23:29:33.181179  153310 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 23:29:33.181234  153310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:29:33.199046  153310 system_svc.go:56] duration metric: took 17.857083ms WaitForService to wait for kubelet
	I1119 23:29:33.199077  153310 kubeadm.go:587] duration metric: took 8.864842478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 23:29:33.199097  153310 node_conditions.go:102] verifying NodePressure condition ...
	I1119 23:29:33.202913  153310 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1119 23:29:33.202946  153310 node_conditions.go:123] node cpu capacity is 2
	I1119 23:29:33.202962  153310 node_conditions.go:105] duration metric: took 3.860193ms to run NodePressure ...
	I1119 23:29:33.202984  153310 start.go:242] waiting for startup goroutines ...
	I1119 23:29:33.203003  153310 start.go:247] waiting for cluster config update ...
	I1119 23:29:33.203031  153310 start.go:256] writing updated cluster config ...
	I1119 23:29:33.203405  153310 ssh_runner.go:195] Run: rm -f paused
	I1119 23:29:33.208766  153310 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:29:33.209279  153310 kapi.go:59] client config for test-preload-529794: &rest.Config{Host:"https://192.168.39.219:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/client.crt", KeyFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/profiles/test-preload-529794/client.key", CAFile:"/home/jenkins/minikube-integration/21918-117497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1119 23:29:33.212397  153310 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-z5jsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:33.217519  153310 pod_ready.go:94] pod "coredns-668d6bf9bc-z5jsw" is "Ready"
	I1119 23:29:33.217549  153310 pod_ready.go:86] duration metric: took 5.122726ms for pod "coredns-668d6bf9bc-z5jsw" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:33.219543  153310 pod_ready.go:83] waiting for pod "etcd-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	W1119 23:29:35.226128  153310 pod_ready.go:104] pod "etcd-test-preload-529794" is not "Ready", error: <nil>
	W1119 23:29:37.226550  153310 pod_ready.go:104] pod "etcd-test-preload-529794" is not "Ready", error: <nil>
	I1119 23:29:38.226382  153310 pod_ready.go:94] pod "etcd-test-preload-529794" is "Ready"
	I1119 23:29:38.226410  153310 pod_ready.go:86] duration metric: took 5.006841116s for pod "etcd-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:38.228357  153310 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:38.232972  153310 pod_ready.go:94] pod "kube-apiserver-test-preload-529794" is "Ready"
	I1119 23:29:38.232999  153310 pod_ready.go:86] duration metric: took 4.616851ms for pod "kube-apiserver-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:38.235277  153310 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:38.239720  153310 pod_ready.go:94] pod "kube-controller-manager-test-preload-529794" is "Ready"
	I1119 23:29:38.239741  153310 pod_ready.go:86] duration metric: took 4.443532ms for pod "kube-controller-manager-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:38.242192  153310 pod_ready.go:83] waiting for pod "kube-proxy-spxmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:38.423696  153310 pod_ready.go:94] pod "kube-proxy-spxmx" is "Ready"
	I1119 23:29:38.423724  153310 pod_ready.go:86] duration metric: took 181.512373ms for pod "kube-proxy-spxmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:38.624484  153310 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:39.024150  153310 pod_ready.go:94] pod "kube-scheduler-test-preload-529794" is "Ready"
	I1119 23:29:39.024190  153310 pod_ready.go:86] duration metric: took 399.673078ms for pod "kube-scheduler-test-preload-529794" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 23:29:39.024210  153310 pod_ready.go:40] duration metric: took 5.815412156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 23:29:39.067350  153310 start.go:628] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1119 23:29:39.068903  153310 out.go:203] 
	W1119 23:29:39.070049  153310 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1119 23:29:39.071221  153310 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1119 23:29:39.072344  153310 out.go:179] * Done! kubectl is now configured to use "test-preload-529794" cluster and "default" namespace by default
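
For readers tracing the api_server.go entries above (polling https://192.168.39.219:8443/healthz until it returns 200 "ok", using the client certificate referenced in the kapi.go config), here is a minimal standalone Go sketch of that pattern. It is illustrative only, not minikube's own code; the endpoint and certificate paths are copied from this log and are assumed to still be valid on the host.

// healthz_poll.go: hypothetical sketch of the healthz retry loop seen above.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	const healthz = "https://192.168.39.219:8443/healthz"
	const mk = "/home/jenkins/minikube-integration/21918-117497/.minikube"

	// Client certificate and cluster CA, as listed in the kapi.go client config above.
	cert, err := tls.LoadX509KeyPair(
		mk+"/profiles/test-preload-529794/client.crt",
		mk+"/profiles/test-preload-529794/client.key",
	)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(mk + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}

	// Poll until the apiserver answers 200, mirroring the "waiting for apiserver healthz status" loop.
	for {
		resp, err := client.Get(healthz)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}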
	
	
	==> CRI-O <==
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.904386087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594979904365121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84074c41-b2d1-4974-a8a1-bd2b878e5714 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.904900986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7da1da44-af3e-48b1-8be7-bf82565669cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.904957259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7da1da44-af3e-48b1-8be7-bf82565669cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.905510268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f590babbc2ada9e96ce9bd53ccbd2730167f9fa7e79ce84f8589fd5502efc1e,PodSandboxId:c2ecff2802b6cdc7c6fbf0dff31e0c0ef2ce9baf4851bdc7c0774e5bd905b7b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763594971399730986,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z5jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f6de19-d210-4fd2-8c15-f89ec737004e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31ff04c0db759fa2f12066a978ef8045e5df451713493d6de2184c826971f78,PodSandboxId:e2f08e9af235f5a486c57c1e404790936d52c8574d09e70a14ce1e4eb75f8a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763594963792512986,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spxmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c48b124c-feac-4978-9753-db4dea33dc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16745e6dea65b0f58c00e42ea7a9dc8544f0a6021f649a072058be6b96805d93,PodSandboxId:94ebd6557771d4d3bb630e0cf4b9f8f15537d84c1d18ab6f774a65cb64f5929d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763594963762246405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc
8c4178-c36c-4791-ac83-01eccaf76b04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc3f00ca1bdfe6cd570deb5efc4823102d1a865d891e36789af05261f0b2736,PodSandboxId:4eef94ef1f799569ac452bbc9489e0bb28d282abeefeb84a82b60fde84a8c395,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763594959479373783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4ddf57656fb671a227a279f25cf5ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c7df85fc0f4d76d62cbe82a468cd3bb0177a8d9c8605da155fa108bd78bfa4,PodSandboxId:970a8dccf88948af00d1e880df89036a00067cd262ebdadc43d8ad6dfb4e0215,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763594959458563666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c16d4906f7bb594a5c29a2cc29256811,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70174630a4a2df1adba8553eb228ea5ce589098d3f00b06709183984e2ba4889,PodSandboxId:046b644dc5816fe17468f653b07b7c772f3a00a1c9453bad0802b909dec6da49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763594959455138655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1815f74d25b94d53a3c3327d6473f4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ab7996537594a6bca7dfc691de09a723cbefdcdd2ffad666ff89a83f911e1d,PodSandboxId:2d38968e9929be86c420f7e4399d4750bc2e19bbb2281a88865030ba2bc08f46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763594959368698283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af706c9c997a00384bb9f21e27200375,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7da1da44-af3e-48b1-8be7-bf82565669cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.949352529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07352780-eec7-49bd-b9eb-ca2ee9c0cd64 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.949431926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07352780-eec7-49bd-b9eb-ca2ee9c0cd64 name=/runtime.v1.RuntimeService/Version
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.950957056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdb1c5f6-d5ac-461d-a6f5-a6fb70763914 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.951383720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594979951362712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdb1c5f6-d5ac-461d-a6f5-a6fb70763914 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.952194358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af61a149-aa49-41a4-9b30-6afdb8a73548 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.952269656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af61a149-aa49-41a4-9b30-6afdb8a73548 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.952433546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f590babbc2ada9e96ce9bd53ccbd2730167f9fa7e79ce84f8589fd5502efc1e,PodSandboxId:c2ecff2802b6cdc7c6fbf0dff31e0c0ef2ce9baf4851bdc7c0774e5bd905b7b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763594971399730986,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z5jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f6de19-d210-4fd2-8c15-f89ec737004e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31ff04c0db759fa2f12066a978ef8045e5df451713493d6de2184c826971f78,PodSandboxId:e2f08e9af235f5a486c57c1e404790936d52c8574d09e70a14ce1e4eb75f8a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763594963792512986,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spxmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c48b124c-feac-4978-9753-db4dea33dc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16745e6dea65b0f58c00e42ea7a9dc8544f0a6021f649a072058be6b96805d93,PodSandboxId:94ebd6557771d4d3bb630e0cf4b9f8f15537d84c1d18ab6f774a65cb64f5929d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763594963762246405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc
8c4178-c36c-4791-ac83-01eccaf76b04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc3f00ca1bdfe6cd570deb5efc4823102d1a865d891e36789af05261f0b2736,PodSandboxId:4eef94ef1f799569ac452bbc9489e0bb28d282abeefeb84a82b60fde84a8c395,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763594959479373783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4ddf57656fb671a227a279f25cf5ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c7df85fc0f4d76d62cbe82a468cd3bb0177a8d9c8605da155fa108bd78bfa4,PodSandboxId:970a8dccf88948af00d1e880df89036a00067cd262ebdadc43d8ad6dfb4e0215,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763594959458563666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c16d4906f7bb594a5c29a2cc29256811,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70174630a4a2df1adba8553eb228ea5ce589098d3f00b06709183984e2ba4889,PodSandboxId:046b644dc5816fe17468f653b07b7c772f3a00a1c9453bad0802b909dec6da49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763594959455138655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1815f74d25b94d53a3c3327d6473f4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ab7996537594a6bca7dfc691de09a723cbefdcdd2ffad666ff89a83f911e1d,PodSandboxId:2d38968e9929be86c420f7e4399d4750bc2e19bbb2281a88865030ba2bc08f46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763594959368698283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af706c9c997a00384bb9f21e27200375,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af61a149-aa49-41a4-9b30-6afdb8a73548 name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.995325089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5780d6f2-b568-4603-bc6d-e8e82909f60b name=/runtime.v1.RuntimeService/Version
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.995565921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5780d6f2-b568-4603-bc6d-e8e82909f60b name=/runtime.v1.RuntimeService/Version
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.997038017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efa16582-7d29-489f-b05d-74ea8f3695df name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.997996998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594979997973164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efa16582-7d29-489f-b05d-74ea8f3695df name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.998490562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94341e49-24b7-4e30-b927-b11bba7bc97f name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.998543130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94341e49-24b7-4e30-b927-b11bba7bc97f name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:39 test-preload-529794 crio[843]: time="2025-11-19 23:29:39.998693790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f590babbc2ada9e96ce9bd53ccbd2730167f9fa7e79ce84f8589fd5502efc1e,PodSandboxId:c2ecff2802b6cdc7c6fbf0dff31e0c0ef2ce9baf4851bdc7c0774e5bd905b7b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763594971399730986,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z5jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f6de19-d210-4fd2-8c15-f89ec737004e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31ff04c0db759fa2f12066a978ef8045e5df451713493d6de2184c826971f78,PodSandboxId:e2f08e9af235f5a486c57c1e404790936d52c8574d09e70a14ce1e4eb75f8a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763594963792512986,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spxmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c48b124c-feac-4978-9753-db4dea33dc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16745e6dea65b0f58c00e42ea7a9dc8544f0a6021f649a072058be6b96805d93,PodSandboxId:94ebd6557771d4d3bb630e0cf4b9f8f15537d84c1d18ab6f774a65cb64f5929d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763594963762246405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc
8c4178-c36c-4791-ac83-01eccaf76b04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc3f00ca1bdfe6cd570deb5efc4823102d1a865d891e36789af05261f0b2736,PodSandboxId:4eef94ef1f799569ac452bbc9489e0bb28d282abeefeb84a82b60fde84a8c395,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763594959479373783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4ddf57656fb671a227a279f25cf5ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c7df85fc0f4d76d62cbe82a468cd3bb0177a8d9c8605da155fa108bd78bfa4,PodSandboxId:970a8dccf88948af00d1e880df89036a00067cd262ebdadc43d8ad6dfb4e0215,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763594959458563666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c16d4906f7bb594a5c29a2cc29256811,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70174630a4a2df1adba8553eb228ea5ce589098d3f00b06709183984e2ba4889,PodSandboxId:046b644dc5816fe17468f653b07b7c772f3a00a1c9453bad0802b909dec6da49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763594959455138655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1815f74d25b94d53a3c3327d6473f4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ab7996537594a6bca7dfc691de09a723cbefdcdd2ffad666ff89a83f911e1d,PodSandboxId:2d38968e9929be86c420f7e4399d4750bc2e19bbb2281a88865030ba2bc08f46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763594959368698283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af706c9c997a00384bb9f21e27200375,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94341e49-24b7-4e30-b927-b11bba7bc97f name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:40 test-preload-529794 crio[843]: time="2025-11-19 23:29:40.035118950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e71ddea8-24cf-4251-abe4-a970254aa0ed name=/runtime.v1.RuntimeService/Version
	Nov 19 23:29:40 test-preload-529794 crio[843]: time="2025-11-19 23:29:40.035189669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e71ddea8-24cf-4251-abe4-a970254aa0ed name=/runtime.v1.RuntimeService/Version
	Nov 19 23:29:40 test-preload-529794 crio[843]: time="2025-11-19 23:29:40.041307536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f17169c5-b49a-4db6-b1da-915b2384c816 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:29:40 test-preload-529794 crio[843]: time="2025-11-19 23:29:40.042009354Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594980041984318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f17169c5-b49a-4db6-b1da-915b2384c816 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 19 23:29:40 test-preload-529794 crio[843]: time="2025-11-19 23:29:40.042532487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac0048d2-cb2b-4436-a4f2-e341033b863e name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:40 test-preload-529794 crio[843]: time="2025-11-19 23:29:40.042608610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac0048d2-cb2b-4436-a4f2-e341033b863e name=/runtime.v1.RuntimeService/ListContainers
	Nov 19 23:29:40 test-preload-529794 crio[843]: time="2025-11-19 23:29:40.042760137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f590babbc2ada9e96ce9bd53ccbd2730167f9fa7e79ce84f8589fd5502efc1e,PodSandboxId:c2ecff2802b6cdc7c6fbf0dff31e0c0ef2ce9baf4851bdc7c0774e5bd905b7b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763594971399730986,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z5jsw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f6de19-d210-4fd2-8c15-f89ec737004e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b31ff04c0db759fa2f12066a978ef8045e5df451713493d6de2184c826971f78,PodSandboxId:e2f08e9af235f5a486c57c1e404790936d52c8574d09e70a14ce1e4eb75f8a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763594963792512986,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-spxmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c48b124c-feac-4978-9753-db4dea33dc6f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16745e6dea65b0f58c00e42ea7a9dc8544f0a6021f649a072058be6b96805d93,PodSandboxId:94ebd6557771d4d3bb630e0cf4b9f8f15537d84c1d18ab6f774a65cb64f5929d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763594963762246405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc
8c4178-c36c-4791-ac83-01eccaf76b04,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dc3f00ca1bdfe6cd570deb5efc4823102d1a865d891e36789af05261f0b2736,PodSandboxId:4eef94ef1f799569ac452bbc9489e0bb28d282abeefeb84a82b60fde84a8c395,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763594959479373783,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4ddf57656fb671a227a279f25cf5ad,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52c7df85fc0f4d76d62cbe82a468cd3bb0177a8d9c8605da155fa108bd78bfa4,PodSandboxId:970a8dccf88948af00d1e880df89036a00067cd262ebdadc43d8ad6dfb4e0215,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763594959458563666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c16d4906f7bb594a5c29a2cc29256811,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70174630a4a2df1adba8553eb228ea5ce589098d3f00b06709183984e2ba4889,PodSandboxId:046b644dc5816fe17468f653b07b7c772f3a00a1c9453bad0802b909dec6da49,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763594959455138655,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1815f74d25b94d53a3c3327d6473f4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ab7996537594a6bca7dfc691de09a723cbefdcdd2ffad666ff89a83f911e1d,PodSandboxId:2d38968e9929be86c420f7e4399d4750bc2e19bbb2281a88865030ba2bc08f46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763594959368698283,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-529794,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af706c9c997a00384bb9f21e27200375,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac0048d2-cb2b-4436-a4f2-e341033b863e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f590babbc2ad       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   c2ecff2802b6c       coredns-668d6bf9bc-z5jsw
	b31ff04c0db75       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   e2f08e9af235f       kube-proxy-spxmx
	16745e6dea65b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   94ebd6557771d       storage-provisioner
	7dc3f00ca1bdf       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   4eef94ef1f799       etcd-test-preload-529794
	52c7df85fc0f4       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   970a8dccf8894       kube-scheduler-test-preload-529794
	70174630a4a2d       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   046b644dc5816       kube-apiserver-test-preload-529794
	84ab799653759       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   2d38968e9929b       kube-controller-manager-test-preload-529794
	
	
	==> coredns [0f590babbc2ada9e96ce9bd53ccbd2730167f9fa7e79ce84f8589fd5502efc1e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52661 - 44808 "HINFO IN 5390531990159652520.4709646183145897631. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040299693s
	
	
	==> describe nodes <==
	Name:               test-preload-529794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-529794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=test-preload-529794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T23_27_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 23:27:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-529794
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 23:29:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 23:29:32 +0000   Wed, 19 Nov 2025 23:27:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 23:29:32 +0000   Wed, 19 Nov 2025 23:27:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 23:29:32 +0000   Wed, 19 Nov 2025 23:27:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 23:29:32 +0000   Wed, 19 Nov 2025 23:29:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    test-preload-529794
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 987a228b72d146f996be834acb164560
	  System UUID:                987a228b-72d1-46f9-96be-834acb164560
	  Boot ID:                    070b734a-eb7b-49f7-8349-f27cd52b383d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-z5jsw                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     101s
	  kube-system                 etcd-test-preload-529794                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-529794             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-test-preload-529794    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-spxmx                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-test-preload-529794             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 98s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  105s               kubelet          Node test-preload-529794 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  105s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    105s               kubelet          Node test-preload-529794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s               kubelet          Node test-preload-529794 status is now: NodeHasSufficientPID
	  Normal   Starting                 105s               kubelet          Starting kubelet.
	  Normal   NodeReady                104s               kubelet          Node test-preload-529794 status is now: NodeReady
	  Normal   RegisteredNode           102s               node-controller  Node test-preload-529794 event: Registered Node test-preload-529794 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-529794 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-529794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-529794 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-529794 has been rebooted, boot id: 070b734a-eb7b-49f7-8349-f27cd52b383d
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-529794 event: Registered Node test-preload-529794 in Controller
	
	
	==> dmesg <==
	[Nov19 23:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Nov19 23:29] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.010651] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.955799] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083125] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.096049] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.476949] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 156 callbacks suppressed
	
	
	==> etcd [7dc3f00ca1bdfe6cd570deb5efc4823102d1a865d891e36789af05261f0b2736] <==
	{"level":"info","ts":"2025-11-19T23:29:19.996600Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-19T23:29:20.004379Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T23:29:20.004720Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"28ab8665a749e374","initial-advertise-peer-urls":["https://192.168.39.219:2380"],"listen-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.219:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T23:29:20.004764Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T23:29:19.996716Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T23:29:20.007785Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2025-11-19T23:29:20.010900Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2025-11-19T23:29:20.004845Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T23:29:20.011967Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T23:29:21.254237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T23:29:21.254277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T23:29:21.254308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgPreVoteResp from 28ab8665a749e374 at term 2"}
	{"level":"info","ts":"2025-11-19T23:29:21.254321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T23:29:21.254326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgVoteResp from 28ab8665a749e374 at term 3"}
	{"level":"info","ts":"2025-11-19T23:29:21.254334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became leader at term 3"}
	{"level":"info","ts":"2025-11-19T23:29:21.254340Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28ab8665a749e374 elected leader 28ab8665a749e374 at term 3"}
	{"level":"info","ts":"2025-11-19T23:29:21.255938Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"28ab8665a749e374","local-member-attributes":"{Name:test-preload-529794 ClientURLs:[https://192.168.39.219:2379]}","request-path":"/0/members/28ab8665a749e374/attributes","cluster-id":"14fc06d09ccfd789","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T23:29:21.255973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T23:29:21.256044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T23:29:21.256420Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T23:29:21.256461Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T23:29:21.257117Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-19T23:29:21.257134Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-19T23:29:21.257791Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.219:2379"}
	{"level":"info","ts":"2025-11-19T23:29:21.257812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:29:40 up 0 min,  0 users,  load average: 1.26, 0.39, 0.14
	Linux test-preload-529794 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 21:15:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [70174630a4a2df1adba8553eb228ea5ce589098d3f00b06709183984e2ba4889] <==
	I1119 23:29:22.521188       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 23:29:22.522402       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 23:29:22.524447       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 23:29:22.525665       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 23:29:22.525713       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 23:29:22.525827       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1119 23:29:22.530403       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 23:29:22.531014       1 shared_informer.go:320] Caches are synced for configmaps
	I1119 23:29:22.533007       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1119 23:29:22.547450       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 23:29:22.549420       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1119 23:29:22.549531       1 aggregator.go:171] initial CRD sync complete...
	I1119 23:29:22.549558       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 23:29:22.549619       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 23:29:22.549640       1 cache.go:39] Caches are synced for autoregister controller
	I1119 23:29:22.567972       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1119 23:29:23.337572       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 23:29:23.419235       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1119 23:29:24.153315       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1119 23:29:24.224705       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1119 23:29:24.268621       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 23:29:24.281065       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 23:29:25.920181       1 controller.go:615] quota admission added evaluator for: endpoints
	I1119 23:29:25.972145       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 23:29:26.073301       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [84ab7996537594a6bca7dfc691de09a723cbefdcdd2ffad666ff89a83f911e1d] <==
	I1119 23:29:25.602255       1 shared_informer.go:320] Caches are synced for PV protection
	I1119 23:29:25.605401       1 shared_informer.go:320] Caches are synced for resource quota
	I1119 23:29:25.609571       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1119 23:29:25.615186       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1119 23:29:25.617239       1 shared_informer.go:320] Caches are synced for attach detach
	I1119 23:29:25.618192       1 shared_informer.go:320] Caches are synced for HPA
	I1119 23:29:25.618725       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1119 23:29:25.621074       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1119 23:29:25.625053       1 shared_informer.go:320] Caches are synced for crt configmap
	I1119 23:29:25.635751       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1119 23:29:25.638547       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1119 23:29:25.646927       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-529794"
	I1119 23:29:25.647763       1 shared_informer.go:320] Caches are synced for GC
	I1119 23:29:25.710088       1 shared_informer.go:320] Caches are synced for garbage collector
	I1119 23:29:25.716323       1 shared_informer.go:320] Caches are synced for garbage collector
	I1119 23:29:25.716362       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 23:29:25.716369       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 23:29:26.081786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="462.85229ms"
	I1119 23:29:26.081928       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="104.336µs"
	I1119 23:29:31.554144       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.027µs"
	I1119 23:29:31.589711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.439026ms"
	I1119 23:29:31.591615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="102.474µs"
	I1119 23:29:32.681261       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-529794"
	I1119 23:29:32.694534       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-529794"
	I1119 23:29:35.579499       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b31ff04c0db759fa2f12066a978ef8045e5df451713493d6de2184c826971f78] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1119 23:29:24.025827       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1119 23:29:24.039936       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
	E1119 23:29:24.040194       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 23:29:24.100022       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1119 23:29:24.100173       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1119 23:29:24.100295       1 server_linux.go:170] "Using iptables Proxier"
	I1119 23:29:24.110327       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 23:29:24.110672       1 server.go:497] "Version info" version="v1.32.0"
	I1119 23:29:24.110688       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:29:24.120686       1 config.go:199] "Starting service config controller"
	I1119 23:29:24.120713       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1119 23:29:24.120833       1 config.go:105] "Starting endpoint slice config controller"
	I1119 23:29:24.120840       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1119 23:29:24.124982       1 config.go:329] "Starting node config controller"
	I1119 23:29:24.125076       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1119 23:29:24.221219       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1119 23:29:24.221408       1 shared_informer.go:320] Caches are synced for service config
	I1119 23:29:24.225184       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [52c7df85fc0f4d76d62cbe82a468cd3bb0177a8d9c8605da155fa108bd78bfa4] <==
	I1119 23:29:20.705014       1 serving.go:386] Generated self-signed cert in-memory
	W1119 23:29:22.408286       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 23:29:22.408395       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 23:29:22.408601       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 23:29:22.409037       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 23:29:22.517373       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1119 23:29:22.517478       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 23:29:22.526960       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 23:29:22.527011       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 23:29:22.527333       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1119 23:29:22.527420       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 23:29:22.628369       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 23:29:22 test-preload-529794 kubelet[1173]: E1119 23:29:22.606172    1173 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-529794\" already exists" pod="kube-system/kube-scheduler-test-preload-529794"
	Nov 19 23:29:22 test-preload-529794 kubelet[1173]: I1119 23:29:22.606206    1173 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-529794"
	Nov 19 23:29:22 test-preload-529794 kubelet[1173]: I1119 23:29:22.609116    1173 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 23:29:22 test-preload-529794 kubelet[1173]: I1119 23:29:22.610149    1173 setters.go:602] "Node became not ready" node="test-preload-529794" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-19T23:29:22Z","lastTransitionTime":"2025-11-19T23:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 19 23:29:22 test-preload-529794 kubelet[1173]: E1119 23:29:22.632769    1173 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-529794\" already exists" pod="kube-system/etcd-test-preload-529794"
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: I1119 23:29:23.312484    1173 apiserver.go:52] "Watching apiserver"
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: E1119 23:29:23.320328    1173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-z5jsw" podUID="11f6de19-d210-4fd2-8c15-f89ec737004e"
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: I1119 23:29:23.334680    1173 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: I1119 23:29:23.415650    1173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c48b124c-feac-4978-9753-db4dea33dc6f-xtables-lock\") pod \"kube-proxy-spxmx\" (UID: \"c48b124c-feac-4978-9753-db4dea33dc6f\") " pod="kube-system/kube-proxy-spxmx"
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: I1119 23:29:23.415685    1173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c48b124c-feac-4978-9753-db4dea33dc6f-lib-modules\") pod \"kube-proxy-spxmx\" (UID: \"c48b124c-feac-4978-9753-db4dea33dc6f\") " pod="kube-system/kube-proxy-spxmx"
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: I1119 23:29:23.415721    1173 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dc8c4178-c36c-4791-ac83-01eccaf76b04-tmp\") pod \"storage-provisioner\" (UID: \"dc8c4178-c36c-4791-ac83-01eccaf76b04\") " pod="kube-system/storage-provisioner"
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: E1119 23:29:23.416163    1173 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: E1119 23:29:23.416226    1173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume podName:11f6de19-d210-4fd2-8c15-f89ec737004e nodeName:}" failed. No retries permitted until 2025-11-19 23:29:23.916206144 +0000 UTC m=+6.702681049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume") pod "coredns-668d6bf9bc-z5jsw" (UID: "11f6de19-d210-4fd2-8c15-f89ec737004e") : object "kube-system"/"coredns" not registered
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: E1119 23:29:23.921083    1173 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 19 23:29:23 test-preload-529794 kubelet[1173]: E1119 23:29:23.921190    1173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume podName:11f6de19-d210-4fd2-8c15-f89ec737004e nodeName:}" failed. No retries permitted until 2025-11-19 23:29:24.921173189 +0000 UTC m=+7.707648097 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume") pod "coredns-668d6bf9bc-z5jsw" (UID: "11f6de19-d210-4fd2-8c15-f89ec737004e") : object "kube-system"/"coredns" not registered
	Nov 19 23:29:24 test-preload-529794 kubelet[1173]: E1119 23:29:24.929795    1173 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 19 23:29:24 test-preload-529794 kubelet[1173]: E1119 23:29:24.929940    1173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume podName:11f6de19-d210-4fd2-8c15-f89ec737004e nodeName:}" failed. No retries permitted until 2025-11-19 23:29:26.929926151 +0000 UTC m=+9.716401072 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume") pod "coredns-668d6bf9bc-z5jsw" (UID: "11f6de19-d210-4fd2-8c15-f89ec737004e") : object "kube-system"/"coredns" not registered
	Nov 19 23:29:25 test-preload-529794 kubelet[1173]: E1119 23:29:25.354580    1173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-z5jsw" podUID="11f6de19-d210-4fd2-8c15-f89ec737004e"
	Nov 19 23:29:26 test-preload-529794 kubelet[1173]: E1119 23:29:26.944338    1173 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 19 23:29:26 test-preload-529794 kubelet[1173]: E1119 23:29:26.944407    1173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume podName:11f6de19-d210-4fd2-8c15-f89ec737004e nodeName:}" failed. No retries permitted until 2025-11-19 23:29:30.944394357 +0000 UTC m=+13.730869261 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/11f6de19-d210-4fd2-8c15-f89ec737004e-config-volume") pod "coredns-668d6bf9bc-z5jsw" (UID: "11f6de19-d210-4fd2-8c15-f89ec737004e") : object "kube-system"/"coredns" not registered
	Nov 19 23:29:27 test-preload-529794 kubelet[1173]: E1119 23:29:27.357514    1173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-z5jsw" podUID="11f6de19-d210-4fd2-8c15-f89ec737004e"
	Nov 19 23:29:27 test-preload-529794 kubelet[1173]: E1119 23:29:27.422792    1173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594967422296659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 19 23:29:27 test-preload-529794 kubelet[1173]: E1119 23:29:27.422844    1173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594967422296659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 19 23:29:37 test-preload-529794 kubelet[1173]: E1119 23:29:37.425173    1173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594977424498882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 19 23:29:37 test-preload-529794 kubelet[1173]: E1119 23:29:37.425813    1173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763594977424498882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141635,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [16745e6dea65b0f58c00e42ea7a9dc8544f0a6021f649a072058be6b96805d93] <==
	I1119 23:29:23.881073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-529794 -n test-preload-529794
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-529794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-529794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-529794
--- FAIL: TestPreload (159.28s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which crictl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which crictl": context deadline exceeded (1.396µs)
iso_test.go:78: failed to verify existence of "crictl" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which crictl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/crictl (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which curl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which curl": context deadline exceeded (215ns)
iso_test.go:78: failed to verify existence of "curl" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which curl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/curl (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which docker"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which docker": context deadline exceeded (270ns)
iso_test.go:78: failed to verify existence of "docker" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which docker\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/docker (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which git"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which git": context deadline exceeded (531ns)
iso_test.go:78: failed to verify existence of "git" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which git\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/git (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which iptables"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which iptables": context deadline exceeded (300ns)
iso_test.go:78: failed to verify existence of "iptables" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which iptables\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/iptables (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which podman"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which podman": context deadline exceeded (317ns)
iso_test.go:78: failed to verify existence of "podman" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which podman\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/podman (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which rsync"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which rsync": context deadline exceeded (317ns)
iso_test.go:78: failed to verify existence of "rsync" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which rsync\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/rsync (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which socat"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which socat": context deadline exceeded (385ns)
iso_test.go:78: failed to verify existence of "socat" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which socat\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/socat (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which wget"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which wget": context deadline exceeded (214ns)
iso_test.go:78: failed to verify existence of "wget" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which wget\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/wget (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which VBoxControl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which VBoxControl": context deadline exceeded (316ns)
iso_test.go:78: failed to verify existence of "VBoxControl" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which VBoxControl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/VBoxControl (0.00s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "which VBoxService"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "which VBoxService": context deadline exceeded (242ns)
iso_test.go:78: failed to verify existence of "VBoxService" binary : args "out/minikube-linux-amd64 -p guest-039657 ssh \"which VBoxService\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/VBoxService (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /data | grep /data"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /data | grep /data": context deadline exceeded (778ns)
iso_test.go:99: failed to verify existence of "/data" mount. args "out/minikube-linux-amd64 -p guest-039657 ssh \"df -t ext4 /data | grep /data\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//data (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker": context deadline exceeded (304ns)
iso_test.go:99: failed to verify existence of "/var/lib/docker" mount. args "out/minikube-linux-amd64 -p guest-039657 ssh \"df -t ext4 /var/lib/docker | grep /var/lib/docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/docker (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni": context deadline exceeded (274ns)
iso_test.go:99: failed to verify existence of "/var/lib/cni" mount. args "out/minikube-linux-amd64 -p guest-039657 ssh \"df -t ext4 /var/lib/cni | grep /var/lib/cni\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/cni (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet": context deadline exceeded (234ns)
iso_test.go:99: failed to verify existence of "/var/lib/kubelet" mount. args "out/minikube-linux-amd64 -p guest-039657 ssh \"df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/kubelet (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube": context deadline exceeded (366ns)
iso_test.go:99: failed to verify existence of "/var/lib/minikube" mount. args "out/minikube-linux-amd64 -p guest-039657 ssh \"df -t ext4 /var/lib/minikube | grep /var/lib/minikube\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/minikube (0.00s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox": context deadline exceeded (385ns)
iso_test.go:99: failed to verify existence of "/var/lib/toolbox" mount. args "out/minikube-linux-amd64 -p guest-039657 ssh \"df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/toolbox (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker": context deadline exceeded (294ns)
iso_test.go:99: failed to verify existence of "/var/lib/boot2docker" mount. args "out/minikube-linux-amd64 -p guest-039657 ssh \"df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/boot2docker (0.00s)
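All five persistent-mount probes above follow the same pattern: run "df -t ext4 <path> | grep <path>" inside the guest over minikube ssh and treat a non-zero exit as a missing mount. The nanosecond-scale "context deadline exceeded" results indicate the commands' context had already expired before the processes were ever started. A minimal sketch of that pattern follows (assuming a plain exec.CommandContext wrapper; this is not the actual helpers_test.go Run helper, and the profile name is simply taken from the log above):

// Sketch of the mount probe. With an already-expired context, CombinedOutput
// returns context.DeadlineExceeded immediately, matching the ~200-400ns failures above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func verifyPersistentMount(ctx context.Context, profile, path string) error {
	probe := fmt.Sprintf("df -t ext4 %s | grep %s", path, path)
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", profile, "ssh", probe)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("failed to verify existence of %q mount: %v (output: %q)", path, err, out)
	}
	return nil
}

func main() {
	// Mimic a timed-out run: the deadline passes before the probes execute,
	// so every probe fails instantly with "context deadline exceeded".
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond)
	for _, p := range []string{"/var/lib/cni", "/var/lib/kubelet", "/var/lib/boot2docker"} {
		if err := verifyPersistentMount(ctx, "guest-039657", p); err != nil {
			fmt.Println(err)
		}
	}
}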

                                                
                                    
TestISOImage/VersionJSON (0s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "cat /version.json"
iso_test.go:106: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "cat /version.json": context deadline exceeded (372ns)
iso_test.go:108: failed to read /version.json. args "out/minikube-linux-amd64 -p guest-039657 ssh \"cat /version.json\"": context deadline exceeded
--- FAIL: TestISOImage/VersionJSON (0.00s)

                                                
                                    
TestISOImage/eBPFSupport (7200.063s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-039657 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
iso_test.go:125: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-039657 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'": context deadline exceeded (645ns)
iso_test.go:127: failed to verify existence of "/sys/kernel/btf/vmlinux" file: args "out/minikube-linux-amd64 -p guest-039657 ssh \"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'\"": context deadline exceeded
iso_test.go:131: expected file "/sys/kernel/btf/vmlinux" to exist, but it does not. BTF types are required for CO-RE eBPF programs; set CONFIG_DEBUG_INFO_BTF in kernel configuration.
--- FAIL: TestISOImage/eBPFSupport (0.00s)
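For reference, the probe here is a simple presence check for the kernel's BTF blob: /sys/kernel/btf/vmlinux is only exposed when the guest kernel is built with CONFIG_DEBUG_INFO_BTF=y, which CO-RE eBPF loaders need in order to relocate types at load time. Note that in this run the ssh command never actually executed (it failed with "context deadline exceeded" in under a microsecond), so the "does not exist" conclusion reflects the timed-out probe rather than a confirmed missing file. A hypothetical standalone equivalent of the check, run inside the guest rather than over minikube ssh:

// Hypothetical equivalent of the probe above (not the iso_test.go helper):
// report whether the running kernel exposes BTF type information.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err != nil {
		fmt.Println("NOT FOUND") // kernel built without CONFIG_DEBUG_INFO_BTF=y
		return
	}
	fmt.Println("OK")
}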
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (15m23s)
		TestNetworkPlugins/group/auto (10m45s)
		TestNetworkPlugins/group/auto/Start (10m45s)
		TestStartStop (14m2s)
		TestStartStop/group/default-k8s-diff-port (1m46s)
		TestStartStop/group/default-k8s-diff-port/serial (1m46s)
		TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)
		TestStartStop/group/newest-cni (7s)
		TestStartStop/group/newest-cni/serial (7s)
		TestStartStop/group/newest-cni/serial/FirstStart (7s)

                                                
                                                
goroutine 3161 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
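The panic above is the Go test binary's own watchdog: testing.(*M).startAlarm arms a timer for the -timeout duration passed to the test binary (2h0m0s here), and when it fires the process panics and dumps every live goroutine, which is what follows. A hypothetical minimal reproduction (5s timeout chosen purely for illustration):

// Run with `go test -timeout=5s`; this produces the same
// "panic: test timed out after 5s" followed by a goroutine dump.
package demo

import (
	"testing"
	"time"
)

func TestExceedsTimeout(t *testing.T) {
	time.Sleep(10 * time.Second) // still running when the alarm fires
}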

                                                
                                                
goroutine 1 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x49b
testing.tRunner(0xc000582700, 0xc00073bbb8)
	/usr/local/go/src/testing/testing.go:1798 +0x12d
testing.runTests(0xc0000100f0, {0x5c342e0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc00025b790?, 0x5c5ca00?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc00069b9a0)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00069b9a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0x105
main.main()
	_testmain.go:133 +0xa8

                                                
                                                
goroutine 1784 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x49b
testing.tRunner(0xc001452fc0, 0x3c23838)
	/usr/local/go/src/testing/testing.go:1798 +0x12d
created by testing.(*T).Run in goroutine 1534
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 136 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000824f90, 0x2d)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0014b9ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007fe9c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x512aec8?, 0x5a97220?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc0014b9f50, {0x3f37720, 0xc0015544b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000208fc0?, {0x3f37720?, 0xc0015544b0?}, 0xc0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008ba380, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 147
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 126 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc000ac39e0)
	/usr/local/go/src/net/http/transport.go:2395 +0xc5f
created by net/http.(*Transport).dialConn in goroutine 155
	/usr/local/go/src/net/http/transport.go:1944 +0x174f

                                                
                                                
goroutine 138 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 137
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3075 [IO wait]:
internal/poll.runtime_pollWait(0x72fd865a3ba0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0017f8300?, 0xc0014f74bf?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017f8300, {0xc0014f74bf, 0xb41, 0xb41})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000536648, {0xc0014f74bf?, 0x41ab46?, 0xc001543dd8?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001df34a0, {0x3f35b20, 0xc0003ba078})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc001df34a0}, {0x3f35b20, 0xc0003ba078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000536648?, {0x3f35ca0, 0xc001df34a0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000536648, {0x3f35ca0, 0xc001df34a0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc001df34a0}, {0x3f35ba0, 0xc000536648}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc001560000?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3041
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 1926 [syscall, 11 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x13, 0xc0013dac58, 0x4, 0xc001766360, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0013dac86?, 0xc0013dadb0?, 0x5930ab?, 0x7ffcce7ac1d8?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc00187e168?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000580008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000256600)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000256600)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc001453dc0, 0xc000256600)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc001453dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc001453dc0, 0xc001606300)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1570
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 129 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc0016ae000)
	/usr/local/go/src/net/http/transport.go:2590 +0xe7
created by net/http.(*Transport).dialConn in goroutine 143
	/usr/local/go/src/net/http/transport.go:1945 +0x17a5

                                                
                                                
goroutine 146 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 110
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 127 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc000ac39e0)
	/usr/local/go/src/net/http/transport.go:2590 +0xe7
created by net/http.(*Transport).dialConn in goroutine 155
	/usr/local/go/src/net/http/transport.go:1945 +0x17a5

                                                
                                                
goroutine 1570 [chan receive, 11 minutes]:
testing.(*T).Run(0xc000208540, {0x31f64cc?, 0x3f2c110?}, 0xc001606300)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000208540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc000208540, 0xc00197a080)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1553
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 128 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc0016ae000)
	/usr/local/go/src/net/http/transport.go:2395 +0xc5f
created by net/http.(*Transport).dialConn in goroutine 143
	/usr/local/go/src/net/http/transport.go:1944 +0x174f

                                                
                                                
goroutine 147 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0007fe9c0, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 110
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 137 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc000509750, 0xc0014a4f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0xa0?, 0xc000509750, 0xc000509798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005097d0?, 0x5932a4?, 0xc00090e230?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 147
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2013 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001b1b2c0, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2011
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 2150 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc00014a480?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2149
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 1929 [select, 11 minutes]:
os/exec.(*Cmd).watchCtx(0xc000256600, 0xc0000851f0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 1926
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 3074 [IO wait]:
internal/poll.runtime_pollWait(0x72fdee6343f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0017f8240?, 0xc0017fa834?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017f8240, {0xc0017fa834, 0x3cc, 0x3cc})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005365b8, {0xc0017fa834?, 0x41ab46?, 0x72fdee7bea78?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001df3470, {0x3f35b20, 0xc0003ba070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc001df3470}, {0x3f35b20, 0xc0003ba070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0005365b8?, {0x3f35ca0, 0xc001df3470})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0005365b8, {0x3f35ca0, 0xc001df3470})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc001df3470}, {0x3f35ba0, 0xc0005365b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0002e1b00?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3041
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 2460 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc00050b750, 0xc00050b798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0xa0?, 0xc00050b750, 0xc00050b798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00050b7d0?, 0x5932a4?, 0xc000ab9080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2516
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2762 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc000506f50, 0xc000506f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0x90?, 0xc000506f50, 0xc000506f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000506fd0?, 0x5932a4?, 0xc0002e0301?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2601 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000825810, 0x0)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001414ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014e5680)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc001eb6300?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc001414f50, {0x3f37720, 0xc0015329f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f37720?, 0xc0015329f0?}, 0x0?, 0xc000256780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000112f10, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2648
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 2516 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001b1b5c0, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2514
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 3076 [select]:
os/exec.(*Cmd).watchCtx(0xc0017d6a80, 0xc001f19730)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 3041
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 2983 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2982
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2012 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc000085e30?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2011
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 2982 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc0014bff50, 0xc0014bff98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0xa0?, 0xc0014bff50, 0xc0014bff98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0xc001b73080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015467d0?, 0x5932a4?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2951
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2218 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc000508f50, 0xc000508f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0x90?, 0xc000508f50, 0xc000508f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000508fd0?, 0x5932a4?, 0xc0017fe1c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2191
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 1953 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc001922f50, 0xc001922f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0x30?, 0xc001922f50, 0xc001922f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001922fd0?, 0x5932a4?, 0xc001e2c930?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2013
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 3136 [syscall]:
syscall.Syscall6(0xf7, 0x3, 0x19, 0xc0000cfb68, 0x4, 0xc0014af290, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0000cfb96?, 0xc0000cfcc0?, 0x5930ab?, 0x7ffcce7ac1d8?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc0015642b8?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000600008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001729980)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc001729980)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc001ea5340, 0xc001729980)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFirstStart({0x3f80350?, 0xc0019ca0e0?}, 0xc001ea5340, {0xc001479008?, 0x0?}, {0xc001460f50?, 0xc001460f60?}, {0x55c773?, 0x4b6a93?}, {0xc002069800, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:184 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001ea5340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc001ea5340, 0xc000115300)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 3135
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 648 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0x72fdee634508, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00197a100?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00197a100)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc00197a100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000ab4300)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc000ab4300)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0001ffa00, {0x3f6dc30, 0xc000ab4300})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0001ffa00)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 645
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

                                                
                                                
goroutine 2151 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0014e4ea0, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2149
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 2393 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc001532990?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2389
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 2219 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2218
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2461 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2460
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2190 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc0020423f0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2158
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 3139 [select]:
os/exec.(*Cmd).watchCtx(0xc001729980, 0xc0016a3a40)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 3136
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 2459 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008de510, 0xd)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001521ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b1b5c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc000ab97a0?, 0xc001de0ae0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc001521f50, {0x3f37720, 0xc000ab9950}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f37720?, 0xc000ab9950?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f0c330, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2516
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 2779 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc001462fa8?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2798
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 1952 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008dea50, 0x10)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0013eece0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b1b2c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc001b1a6c0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc0013eef50, {0x3f37720, 0xc001594300}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f37720?, 0xc001594300?}, 0x0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001906220, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2013
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 3138 [IO wait]:
internal/poll.runtime_pollWait(0x72fdee6341c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001759200?, 0xc001950cd4?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001759200, {0xc001950cd4, 0x132c, 0x132c})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007806a0, {0xc001950cd4?, 0x41ab46?, 0xc001f55e78?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0014c9170, {0x3f35b20, 0xc0005367e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc0014c9170}, {0x3f35b20, 0xc0005367e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007806a0?, {0x3f35ca0, 0xc0014c9170})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0007806a0, {0x3f35ca0, 0xc0014c9170})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc0014c9170}, {0x3f35ba0, 0xc0007806a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0017ffe30?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3136
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 1534 [chan receive, 15 minutes]:
testing.(*T).Run(0xc001404e00, {0x31f64c7?, 0x55c773?}, 0x3c23838)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop(0xc001404e00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001404e00, 0x3c23648)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 3023 [IO wait]:
internal/poll.runtime_pollWait(0x72fdee634a80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001561b00?, 0xc00154a000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001561b00, {0xc00154a000, 0x1800, 0x1800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001561b00, {0xc00154a000?, 0xc00154a000?, 0x5?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000780190, {0xc00154a000?, 0x72fd8667d058?, 0x72fdee7bea78?})
	/usr/local/go/src/net/net.go:194 +0x45
crypto/tls.(*atLeastReader).Read(0xc001da6b28, {0xc00154a000?, 0x17fb?, 0x3?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0014d49b8, {0x3f37dc0, 0xc001da6b28})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0014d4708, {0x3f37200, 0xc000780190}, 0x4426b4?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0014d4708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0014d4708, {0xc001610000, 0x1000, 0x41345e?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x145
bufio.(*Reader).Read(0xc001b7bbc0, {0xc002070900, 0x9, 0x4134e0?})
	/usr/local/go/src/bufio/bufio.go:245 +0x197
io.ReadAtLeast({0x3f35d40, 0xc001b7bbc0}, {0xc002070900, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x91
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc002070900, 0x9, 0x9dbbb5?}, {0x3f35d40?, 0xc001b7bbc0?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/frame.go:242 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0020708c0)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/frame.go:506 +0x7d
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001702fa8)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:2258 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001ea4a80)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:2127 +0x79
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3022
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:912 +0xde5

                                                
                                                
goroutine 2678 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc0000bbf50, 0xc0000bbf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0xa0?, 0xc0000bbf50, 0xc0000bbf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000bbfd0?, 0x5932a4?, 0xc0000844d0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2571
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 1496 [chan receive, 17 minutes]:
testing.(*T).Run(0xc0014041c0, {0x31f64c7?, 0x5a97220?}, 0xc0000108b8)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0014041c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xf3
testing.tRunner(0xc0014041c0, 0x3c23600)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2108 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000ab4b50, 0xf)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001516ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014e4ea0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc000610180?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc001516f50, {0x3f37720, 0xc001606750}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f37720?, 0xc001606750?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001810aa0, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2151
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 2763 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2762
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2515 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc001de4cb0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2514
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 3135 [chan receive]:
testing.(*T).Run(0xc001ea5180, {0x32014a5?, 0xc00145e570?}, 0xc000115300)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001ea5180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc001ea5180, 0xc000115280)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1786
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2936 [chan receive]:
testing.(*T).Run(0xc00196ec40, {0x31f573a?, 0xc000509570?}, 0xc0002e1b00)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00196ec40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc00196ec40, 0xc0001f5f00)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1787
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2191 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001de0720, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2158
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 2761 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0008692d0, 0x0)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0013dbce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000881c80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc0014e5d40?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc0013dbf50, {0x3f37720, 0xc0005d09c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f37720?, 0xc0005d09c0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f0c3d0, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2780
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 2648 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0014e5680, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2619
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 2677 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00045ead0, 0x0)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0014a2ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002309320)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x512aec8?, 0x5a97220?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc0014a2f50, {0x3f37720, 0xc000703ce0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00196efc0?, {0x3f37720?, 0xc000703ce0?}, 0xc0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001907000, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2571
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 2110 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2109
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2647 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc0014d30e0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2619
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 1553 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x49b
testing.tRunner(0xc000602700, 0xc0000108b8)
	/usr/local/go/src/testing/testing.go:1798 +0x12d
created by testing.(*T).Run in goroutine 1496
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2929 [IO wait]:
internal/poll.runtime_pollWait(0x72fdee6342d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0002e1080?, 0xc0008e9000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0002e1080, {0xc0008e9000, 0x1800, 0x1800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0002e1080, {0xc0008e9000?, 0xc?, 0xc001523968?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000780008, {0xc0008e9000?, 0x41b1b4?, 0xc0008e9005?})
	/usr/local/go/src/net/net.go:194 +0x45
crypto/tls.(*atLeastReader).Read(0xc001da6660, {0xc0008e9000?, 0x17fb?, 0x3?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0014d42b8, {0x3f37dc0, 0xc001da6660})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0014d4008, {0x72fd86649240, 0xc000010048}, 0x4426b4?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0014d4008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0014d4008, {0xc000346000, 0x1000, 0x41345e?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x145
bufio.(*Reader).Read(0xc001de0900, {0xc0020704a0, 0x9, 0x4134e0?})
	/usr/local/go/src/bufio/bufio.go:245 +0x197
io.ReadAtLeast({0x3f35d40, 0xc001de0900}, {0xc0020704a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x91
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0020704a0, 0x9, 0x9dbbb5?}, {0x3f35d40?, 0xc001de0900?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/frame.go:242 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc002070460)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/frame.go:506 +0x7d
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001523fa8)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:2258 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001ea41c0)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:2127 +0x79
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2928
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:912 +0xde5

                                                
                                                
goroutine 2950 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc0002e1100?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3004
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 1786 [chan receive]:
testing.(*T).Run(0xc001453340, {0x31f790b?, 0x0?}, 0xc000115280)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001453340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xb19
testing.tRunner(0xc001453340, 0xc000ab41c0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1784
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2981 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00090cd10, 0x0)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001519ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0017fd0e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc001de4930?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc001519f50, {0x3f37720, 0xc000702300}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002483c0?, {0x3f37720?, 0xc000702300?}, 0x0?, 0xc001463760?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008ba0f0, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2951
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9
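
Editor's note: several goroutines in this dump share the same shape as goroutine 2981: a worker kept alive by wait.Until that parks inside a workqueue Get() (the sync.Cond.Wait frame) whenever the queue is empty. The following is a minimal sketch of that pattern, not the cert_rotation.go source; the item value is illustrative.

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewTyped[string]()
	stop := make(chan struct{})

	processNext := func() bool {
		item, shutdown := queue.Get() // parks the goroutine while the queue is empty
		if shutdown {
			return false
		}
		defer queue.Done(item)
		fmt.Println("processing", item)
		return true
	}

	// wait.Until keeps the worker loop alive until stop is closed.
	go wait.Until(func() {
		for processNext() {
		}
	}, time.Second, stop)

	queue.Add("rotate-client-cert")
	time.Sleep(100 * time.Millisecond)
	queue.ShutDown()
	close(stop)
}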

                                                
                                                
goroutine 1787 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001453500, {0x31f790b?, 0x0?}, 0xc0001f5f00)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001453500)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xb19
testing.tRunner(0xc001453500, 0xc000ab4200)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1784
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 2109 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc000ae4f50, 0xc0014bbf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0x50?, 0xc000ae4f50, 0xc000ae4f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc001506a80?, 0xc001a95d50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2151
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2570 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3f92dc0, {{0x3f87de8, 0xc0002483c0?}, 0xc001e2c380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2672
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

                                                
                                                
goroutine 2217 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000868210, 0xf)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc0014a3ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001de0720)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc0014e5860?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc0014a3f50, {0x3f37720, 0xc0014c9140}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f37720?, 0xc0014c9140?}, 0x0?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b47d90, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2191
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 2571 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc002309320, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2672
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 2050 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1953
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2951 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0017fd0e0, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3004
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 2394 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001aa92c0, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2389
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 3137 [IO wait]:
internal/poll.runtime_pollWait(0x72fdee634dc8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001759140?, 0xc0019d05f6?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001759140, {0xc0019d05f6, 0x20a, 0x20a})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000780678, {0xc0019d05f6?, 0x41ab46?, 0xc001460dd8?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0014c9110, {0x3f35b20, 0xc0005367c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc0014c9110}, {0x3f35b20, 0xc0005367c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000780678?, {0x3f35ca0, 0xc0014c9110})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000780678, {0x3f35ca0, 0xc0014c9110})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc0014c9110}, {0x3f35ba0, 0xc000780678}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000115300?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3136
	/usr/local/go/src/os/exec/exec.go:748 +0x92b
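
Editor's note: goroutines like 3137 are the output copiers that os/exec starts for the test's minikube invocations. A small sketch (the command is a placeholder): when Cmd.Stdout/Stderr are not *os.File values, (*Cmd).Start creates a pipe plus one goroutine per stream to copy the child's output, and those goroutines sit in "IO wait" until the child writes or exits.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var out, errBuf bytes.Buffer
	cmd := exec.Command("echo", "hello")
	cmd.Stdout = &out // non-*os.File writers force Start to spawn copier goroutines
	cmd.Stderr = &errBuf
	if err := cmd.Run(); err != nil { // Run = Start + Wait; Wait joins the copiers
		fmt.Println("command failed:", err)
		return
	}
	fmt.Print(out.String())
}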

                                                
                                                
goroutine 1928 [IO wait]:
internal/poll.runtime_pollWait(0x72fdee6340a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001b1a480?, 0xc0016d6375?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b1a480, {0xc0016d6375, 0x19c8b, 0x19c8b})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0003ba0b0, {0xc0016d6375?, 0x41835f?, 0x2c473e0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001606420, {0x3f35b20, 0xc0005361d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc001606420}, {0x3f35b20, 0xc0005361d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0003ba0b0?, {0x3f35ca0, 0xc001606420})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0003ba0b0, {0x3f35ca0, 0xc001606420})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc001606420}, {0x3f35ba0, 0xc0003ba0b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0013be000?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1926
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 1927 [IO wait, 9 minutes]:
internal/poll.runtime_pollWait(0x72fdee634ee0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001b1a360?, 0xc0016fe2f1?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b1a360, {0xc0016fe2f1, 0x50f, 0x50f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0003ba098, {0xc0016fe2f1?, 0x41835f?, 0x2c473e0?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0016063f0, {0x3f35b20, 0xc0000bd1f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f35ca0, 0xc0016063f0}, {0x3f35b20, 0xc0000bd1f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0003ba098?, {0x3f35ca0, 0xc0016063f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0003ba098, {0x3f35ca0, 0xc0016063f0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f35ca0, 0xc0016063f0}, {0x3f35ba0, 0xc0003ba098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc001a95730?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1926
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

                                                
                                                
goroutine 2317 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001f4e6d0, 0xe)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001514ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3f96240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001aa92c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0xc000ab2270?, 0x481f72?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3f806d0?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3f806d0, 0xc0000844d0}, 0xc001514f50, {0x3f37720, 0xc000912570}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f37720?, 0xc000912570?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0020282a0, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2394
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

                                                
                                                
goroutine 2318 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc001923750, 0xc001923798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0x2b?, 0xc001923750, 0xc001923798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0xc001923760?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x3f87de8?, 0xc0002483c0?, 0xc001532990?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2394
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286
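
Editor's note: goroutines 2109, 2318, 2602, and friends are all parked inside the same polling helper the frames name. A minimal sketch of that call (interval, timeout, and condition below are illustrative, not this run's values): the goroutine spends its life in a select, waking on each tick to re-check a condition or exiting when the context is cancelled.

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	attempts := 0
	err := wait.PollImmediateUntilWithContext(ctx, 200*time.Millisecond, func(ctx context.Context) (bool, error) {
		attempts++
		return attempts >= 3, nil // pretend the condition becomes true on the third check
	})
	fmt.Println("attempts:", attempts, "err:", err)
}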

                                                
                                                
goroutine 2319 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2318
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2780 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc000881c80, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2798
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

                                                
                                                
goroutine 3041 [syscall]:
syscall.Syscall6(0xf7, 0x3, 0x15, 0xc0000cdb18, 0x4, 0xc001766c60, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0000cdb46?, 0xc0000cdc70?, 0x5930ab?, 0x7ffcce7ac1d8?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc001531d58?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0x5c5f0a0?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0017d6a80)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc0017d6a80)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc00196e380, 0xc0017d6a80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStop({0x3f80350?, 0xc00043a0e0?}, 0xc00196e380, {0xc001daa860?, 0x0?}, {0xc001925750?, 0xc001925760?}, {0x55c773?, 0x4b6a93?}, {0xc00144eb00, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:226 +0x15d
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00196e380)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc00196e380, 0xc0002e1b00)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 2936
	/usr/local/go/src/testing/testing.go:1851 +0x413
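
Editor's note: goroutine 3041 is the test helper blocked in a waitid syscall while a minikube stop command runs. As a hedged sketch (the command and timeout are placeholders, not the report's real values), this is simply (*Cmd).Run blocking until the child exits; a context deadline is one common way to bound such a wait.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "1")
	if err := cmd.Run(); err != nil { // blocks here, like the validateStop frame above
		fmt.Println("command failed:", err)
		return
	}
	fmt.Println("exit code:", cmd.ProcessState.ExitCode())
}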

                                                
                                                
goroutine 2602 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3f806d0, 0xc0000844d0}, 0xc001546f50, 0xc001546f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3f806d0, 0xc0000844d0}, 0x6e?, 0xc001546f50, 0xc001546f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3f806d0?, 0xc0000844d0?}, 0x0?, 0xc001546f60?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x3f87de8?, 0xc0002483c0?, 0xc0014d30e0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 2648
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

                                                
                                                
goroutine 2603 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2602
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2679 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2678
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

                                                
                                    

Test pass (123/190)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.75
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.66
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
22 TestOffline 81.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 131.9
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.56
35 TestAddons/parallel/Registry 16.91
36 TestAddons/parallel/RegistryCreds 0.86
38 TestAddons/parallel/InspektorGadget 11.88
39 TestAddons/parallel/MetricsServer 6.69
41 TestAddons/parallel/CSI 55.82
42 TestAddons/parallel/Headlamp 20.5
43 TestAddons/parallel/CloudSpanner 5.72
44 TestAddons/parallel/LocalPath 57.1
45 TestAddons/parallel/NvidiaDevicePlugin 5.99
46 TestAddons/parallel/Yakd 11.8
48 TestAddons/StoppedEnableDisable 90.5
49 TestCertOptions 45.22
50 TestCertExpiration 309.72
52 TestForceSystemdFlag 64.36
53 TestForceSystemdEnv 64.93
58 TestErrorSpam/setup 38.51
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.67
61 TestErrorSpam/pause 1.62
62 TestErrorSpam/unpause 1.94
63 TestErrorSpam/stop 4.95
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 78.59
68 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/KubeContext 0.05
82 TestFunctional/delete_echo-server_images 0
83 TestFunctional/delete_my-image_image 0
84 TestFunctional/delete_minikube_cached_images 0
89 TestMultiControlPlane/serial/StartCluster 240.36
90 TestMultiControlPlane/serial/DeployApp 6.6
91 TestMultiControlPlane/serial/PingHostFromPods 1.34
92 TestMultiControlPlane/serial/AddWorkerNode 46.42
93 TestMultiControlPlane/serial/NodeLabels 0.07
94 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.72
95 TestMultiControlPlane/serial/CopyFile 10.88
96 TestMultiControlPlane/serial/StopSecondaryNode 89.94
97 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
98 TestMultiControlPlane/serial/RestartSecondaryNode 40.65
99 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
112 TestJSONOutput/start/Command 54.41
113 TestJSONOutput/start/Audit 0
115 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
116 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
118 TestJSONOutput/pause/Command 0.72
119 TestJSONOutput/pause/Audit 0
121 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
122 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
124 TestJSONOutput/unpause/Command 0.64
125 TestJSONOutput/unpause/Audit 0
127 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
128 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
130 TestJSONOutput/stop/Command 7.68
131 TestJSONOutput/stop/Audit 0
133 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
135 TestErrorJSONOutput 0.23
140 TestMainNoArgs 0.06
141 TestMinikubeProfile 85.48
144 TestMountStart/serial/StartWithMountFirst 23.75
145 TestMountStart/serial/VerifyMountFirst 0.3
146 TestMountStart/serial/StartWithMountSecond 21.74
147 TestMountStart/serial/VerifyMountSecond 0.31
148 TestMountStart/serial/DeleteFirst 0.71
149 TestMountStart/serial/VerifyMountPostDelete 0.31
150 TestMountStart/serial/Stop 1.33
151 TestMountStart/serial/RestartStopped 21.09
152 TestMountStart/serial/VerifyMountPostStop 0.31
155 TestMultiNode/serial/FreshStart2Nodes 103.69
156 TestMultiNode/serial/DeployApp2Nodes 5.19
157 TestMultiNode/serial/PingHostFrom2Pods 0.88
158 TestMultiNode/serial/AddNode 44.28
159 TestMultiNode/serial/MultiNodeLabels 0.06
160 TestMultiNode/serial/ProfileList 0.47
161 TestMultiNode/serial/CopyFile 6.11
162 TestMultiNode/serial/StopNode 2.34
163 TestMultiNode/serial/StartAfterStop 39.92
164 TestMultiNode/serial/RestartKeepsNodes 312.18
165 TestMultiNode/serial/DeleteNode 2.6
166 TestMultiNode/serial/StopMultiNode 172.61
167 TestMultiNode/serial/RestartMultiNode 97.95
168 TestMultiNode/serial/ValidateNameConflict 44.34
175 TestScheduledStopUnix 113.86
179 TestRunningBinaryUpgrade 155.97
181 TestKubernetesUpgrade 237.28
194 TestISOImage/Setup 73.42
202 TestStoppedBinaryUpgrade/Setup 0.45
203 TestStoppedBinaryUpgrade/Upgrade 149.29
216 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
218 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
219 TestNoKubernetes/serial/StartWithK8s 63.02
221 TestPause/serial/Start 121.72
223 TestNoKubernetes/serial/StartWithStopK8s 30.13
224 TestNoKubernetes/serial/Start 23.22
225 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
226 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
227 TestNoKubernetes/serial/ProfileList 1.68
228 TestNoKubernetes/serial/Stop 1.38
229 TestNoKubernetes/serial/StartNoArgs 20.62
230 TestPause/serial/SecondStartNoReconfiguration 49.26
231 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.16
233 TestPause/serial/Pause 0.77
234 TestPause/serial/VerifyStatus 0.24
235 TestPause/serial/Unpause 0.71
236 TestPause/serial/PauseAgain 0.91
237 TestPause/serial/DeletePaused 0.85
238 TestPause/serial/VerifyDeletedResources 0.74
x
+
TestDownloadOnly/v1.28.0/json-events (6.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-173287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-173287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.753935204s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.75s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 21:47:06.076599  121369 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1119 21:47:06.076698  121369 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-173287
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-173287: exit status 85 (73.770406ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME
	start | -o=json --download-only -p download-only-173287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio | download-only-173287 | jenkins | v1.37.0 | 19 Nov 25 21:46 UTC |
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:46:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:46:59.375995  121381 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:46:59.376128  121381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:46:59.376138  121381 out.go:374] Setting ErrFile to fd 2...
	I1119 21:46:59.376143  121381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:46:59.376384  121381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	W1119 21:46:59.376505  121381 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21918-117497/.minikube/config/config.json: open /home/jenkins/minikube-integration/21918-117497/.minikube/config/config.json: no such file or directory
	I1119 21:46:59.376999  121381 out.go:368] Setting JSON to true
	I1119 21:46:59.377840  121381 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12566,"bootTime":1763576253,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:46:59.377912  121381 start.go:143] virtualization: kvm guest
	I1119 21:46:59.379913  121381 out.go:99] [download-only-173287] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1119 21:46:59.380071  121381 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 21:46:59.380307  121381 notify.go:221] Checking for updates...
	I1119 21:46:59.382156  121381 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:46:59.383444  121381 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:46:59.384534  121381 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 21:46:59.385695  121381 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:46:59.386702  121381 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 21:46:59.388673  121381 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:46:59.388924  121381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:46:59.421954  121381 out.go:99] Using the kvm2 driver based on user configuration
	I1119 21:46:59.421987  121381 start.go:309] selected driver: kvm2
	I1119 21:46:59.422000  121381 start.go:930] validating driver "kvm2" against <nil>
	I1119 21:46:59.422312  121381 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:46:59.422778  121381 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1119 21:46:59.422940  121381 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:46:59.422971  121381 cni.go:84] Creating CNI manager for ""
	I1119 21:46:59.423021  121381 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1119 21:46:59.423029  121381 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1119 21:46:59.423067  121381 start.go:353] cluster config:
	{Name:download-only-173287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-173287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:46:59.423241  121381 iso.go:125] acquiring lock: {Name:mk95e5a645bfae75190ef550e02bd4f48b331040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:46:59.424606  121381 out.go:99] Downloading VM boot image ...
	I1119 21:46:59.424641  121381 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/iso/amd64/minikube-v1.37.0-1763575914-21918-amd64.iso
	I1119 21:47:02.448397  121381 out.go:99] Starting "download-only-173287" primary control-plane node in "download-only-173287" cluster
	I1119 21:47:02.448443  121381 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 21:47:02.465654  121381 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 21:47:02.465700  121381 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:02.465933  121381 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 21:47:02.467401  121381 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 21:47:02.467423  121381 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1119 21:47:02.491775  121381 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1119 21:47:02.491945  121381 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-173287 host does not exist
	  To start a cluster, run: "minikube start -p download-only-173287"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
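
Editor's note: the log above fetches the preload tarball through a "?checksum=md5:..." URL after reading the expected digest from the GCS API. The following is a generic sketch of that kind of verification, not minikube's own download code; the file path is a placeholder, while the digest is the one reported in the log.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes a downloaded file and compares it to the expected digest.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	if err := verifyMD5("preloaded-images.tar.lz4", "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
		fmt.Println(err)
	}
}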

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-173287
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-103796 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-103796 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.659705365s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 21:47:11.112259  121369 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1119 21:47:11.112317  121369 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-103796
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-103796: exit status 85 (72.263523ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME
	start | -o=json --download-only -p download-only-173287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio | download-only-173287 | jenkins | v1.37.0 | 19 Nov 25 21:46 UTC |
	delete | --all | minikube | jenkins | v1.37.0 | 19 Nov 25 21:47 UTC | 19 Nov 25 21:47 UTC
	delete | -p download-only-173287 | download-only-173287 | jenkins | v1.37.0 | 19 Nov 25 21:47 UTC | 19 Nov 25 21:47 UTC
	start | -o=json --download-only -p download-only-103796 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio | download-only-103796 | jenkins | v1.37.0 | 19 Nov 25 21:47 UTC |
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:06.504616  121580 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:06.504910  121580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:06.504920  121580 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:06.504925  121580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:06.505201  121580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 21:47:06.505734  121580 out.go:368] Setting JSON to true
	I1119 21:47:06.506606  121580 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12573,"bootTime":1763576253,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:47:06.506708  121580 start.go:143] virtualization: kvm guest
	I1119 21:47:06.508645  121580 out.go:99] [download-only-103796] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:47:06.508803  121580 notify.go:221] Checking for updates...
	I1119 21:47:06.510001  121580 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:47:06.511843  121580 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:06.513478  121580 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	I1119 21:47:06.515140  121580 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	I1119 21:47:06.516532  121580 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-103796 host does not exist
	  To start a cluster, run: "minikube start -p download-only-103796"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-103796
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1119 21:47:11.781223  121369 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-538182 --alsologtostderr --binary-mirror http://127.0.0.1:41025 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-538182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-538182
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
x
+
TestOffline (81.39s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-405415 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-405415 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.400240545s)
helpers_test.go:175: Cleaning up "offline-crio-405415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-405415
--- PASS: TestOffline (81.39s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-638975
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-638975: exit status 85 (65.663086ms)

                                                
                                                
-- stdout --
	* Profile "addons-638975" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-638975"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-638975
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-638975: exit status 85 (66.113443ms)

                                                
                                                
-- stdout --
	* Profile "addons-638975" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-638975"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (131.9s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-638975 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-638975 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m11.899291011s)
--- PASS: TestAddons/Setup (131.90s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-638975 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-638975 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-638975 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-638975 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [524d4592-b3ee-4ebd-a2bb-c0e0834a1ed7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [524d4592-b3ee-4ebd-a2bb-c0e0834a1ed7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004292159s
addons_test.go:694: (dbg) Run:  kubectl --context addons-638975 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-638975 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-638975 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)
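
For reference, the env-var injection checked above can be reproduced by hand against any pod in a non-kube-system namespace. A minimal sketch, assuming the gcp-auth addon is enabled; the profile name addons-demo and pod name env-probe are placeholders, not taken from this run:

# launch a throwaway pod and inspect what the gcp-auth webhook injected
kubectl --context addons-demo run env-probe --image=busybox --restart=Never -- sleep 3600
kubectl --context addons-demo wait --for=condition=ready pod/env-probe --timeout=120s
kubectl --context addons-demo exec env-probe -- printenv GOOGLE_APPLICATION_CREDENTIALS
kubectl --context addons-demo exec env-probe -- printenv GOOGLE_CLOUD_PROJECT
kubectl --context addons-demo delete pod env-probe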

                                                
                                    
TestAddons/parallel/Registry (16.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.858611ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-s4jwv" [a2066b20-dd37-4423-97f0-1146b27baf9f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.231272897s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xjlbv" [d9c8a85c-fb02-41d8-b56a-31dd13e0aca3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006023427s
addons_test.go:392: (dbg) Run:  kubectl --context addons-638975 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-638975 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-638975 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.896672276s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 ip
2025/11/19 21:49:59 [DEBUG] GET http://192.168.39.215:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.91s)
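
The same reachability checks can be run outside the test harness. A rough sketch, assuming the registry addon is enabled; the profile name addons-demo is a placeholder, while the service name and port are the ones that appear in the log above:

# probe the registry through its in-cluster DNS name
kubectl --context addons-demo run registry-probe --rm --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# registry-proxy also publishes the registry on port 5000 of the node IP (standard /v2/ API root)
curl -sI "http://$(minikube -p addons-demo ip):5000/v2/"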

                                                
                                    
TestAddons/parallel/RegistryCreds (0.86s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.560371ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-638975
addons_test.go:332: (dbg) Run:  kubectl --context addons-638975 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.86s)
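
A minimal sketch of the same configure-then-verify flow, assuming a credentials file already written in the format that "minikube addons configure registry-creds -f" expects (the JSON schema is not reproduced here; file and profile names are placeholders):

minikube -p addons-demo addons enable registry-creds
minikube -p addons-demo addons configure registry-creds -f ./my-registry-creds.json
# the addon materializes the configured credentials as secrets in kube-system
kubectl --context addons-demo -n kube-system get secret -o name | grep -i creds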

                                                
                                    
TestAddons/parallel/InspektorGadget (11.88s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-4q9mt" [5ad2b1ee-8739-4bd3-9e74-dec670dcb610] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.010636504s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable inspektor-gadget --alsologtostderr -v=1: (5.873032993s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.973743ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xltwn" [3373e30d-0efb-4d43-a703-3d25e2b814da] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.229387809s
addons_test.go:463: (dbg) Run:  kubectl --context addons-638975 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable metrics-server --alsologtostderr -v=1: (1.358602993s)
--- PASS: TestAddons/parallel/MetricsServer (6.69s)
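
Once metrics-server is healthy, the same data is available interactively; a short sketch (profile name is a placeholder):

minikube -p addons-demo addons enable metrics-server
# top only returns data after the metrics API is registered and the first scrape has landed
kubectl --context addons-demo top nodes
kubectl --context addons-demo top pods -n kube-system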

                                                
                                    
TestAddons/parallel/CSI (55.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1119 21:49:55.186699  121369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1119 21:49:55.201627  121369 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 21:49:55.201661  121369 kapi.go:107] duration metric: took 14.97671ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 14.991098ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-638975 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-638975 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c2d173e5-dfde-4934-8b19-120bb0869bd3] Pending
helpers_test.go:352: "task-pv-pod" [c2d173e5-dfde-4934-8b19-120bb0869bd3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c2d173e5-dfde-4934-8b19-120bb0869bd3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003798464s
addons_test.go:572: (dbg) Run:  kubectl --context addons-638975 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-638975 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-638975 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-638975 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-638975 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-638975 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-638975 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3c327a40-9186-4bca-872b-0a2423ce32b9] Pending
helpers_test.go:352: "task-pv-pod-restore" [3c327a40-9186-4bca-872b-0a2423ce32b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3c327a40-9186-4bca-872b-0a2423ce32b9] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005428358s
addons_test.go:614: (dbg) Run:  kubectl --context addons-638975 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-638975 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-638975 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable volumesnapshots --alsologtostderr -v=1: (1.013705475s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.02021014s)
--- PASS: TestAddons/parallel/CSI (55.82s)
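
The PVC -> snapshot -> restore sequence above maps onto three small manifests. A sketch of the same flow, assuming the csi-hostpath-driver and volumesnapshots addons are enabled and that the addon's default classes are named csi-hostpath-sc and csi-hostpath-snapclass (verify with: kubectl get storageclass,volumesnapshotclass):

# 1) source PVC; with a WaitForFirstConsumer class it only binds once a pod mounts it
#    (the test does this with its task-pv-pod step before snapshotting)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: demo-pvc}
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: csi-hostpath-sc
  resources: {requests: {storage: 1Gi}}
EOF

# 2) snapshot the bound PVC and wait for readiness (jsonpath wait needs kubectl >= 1.23)
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata: {name: demo-snapshot}
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source: {persistentVolumeClaimName: demo-pvc}
EOF
kubectl wait volumesnapshot/demo-snapshot --for=jsonpath='{.status.readyToUse}'=true --timeout=120s

# 3) restore into a new PVC by pointing dataSource at the snapshot
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: demo-pvc-restore}
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: csi-hostpath-sc
  resources: {requests: {storage: 1Gi}}
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: demo-snapshot
EOF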

                                                
                                    
TestAddons/parallel/Headlamp (20.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-638975 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-z2kpr" [2060d32b-12cb-472d-8c47-f1cc38d5ae5e] Pending
helpers_test.go:352: "headlamp-6945c6f4d-z2kpr" [2060d32b-12cb-472d-8c47-f1cc38d5ae5e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-z2kpr" [2060d32b-12cb-472d-8c47-f1cc38d5ae5e] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005203381s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable headlamp --alsologtostderr -v=1: (6.606232324s)
--- PASS: TestAddons/parallel/Headlamp (20.50s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-nxjpc" [efc84b5f-dab4-4273-84ba-8a5526b5b5dd] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004238488s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

                                                
                                    
TestAddons/parallel/LocalPath (57.1s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-638975 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-638975 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [584f7b8f-d608-4ab1-af7a-d4d852fad442] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [584f7b8f-d608-4ab1-af7a-d4d852fad442] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [584f7b8f-d608-4ab1-af7a-d4d852fad442] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003539456s
addons_test.go:967: (dbg) Run:  kubectl --context addons-638975 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 ssh "cat /opt/local-path-provisioner/pvc-2210aff6-240c-446f-aed2-60b4ee919562_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-638975 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-638975 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.100587872s)
--- PASS: TestAddons/parallel/LocalPath (57.10s)
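
The local-path flow above is just an ordinary PVC plus a writer pod. A compact sketch, assuming the storage-provisioner-rancher addon is enabled and its storage class is named local-path (verify with kubectl get storageclass); profile and object names are placeholders:

kubectl --context addons-demo apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: local-demo}
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: local-path
  resources: {requests: {storage: 128Mi}}
---
apiVersion: v1
kind: Pod
metadata: {name: local-demo-writer}
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello > /data/file1"]
    volumeMounts: [{name: data, mountPath: /data}]
  volumes:
  - name: data
    persistentVolumeClaim: {claimName: local-demo}
EOF
# the provisioner backs the volume with a host directory under /opt/local-path-provisioner
minikube -p addons-demo ssh "cat /opt/local-path-provisioner/pvc-*_default_local-demo/file1"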

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.99s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-p5fgq" [dc9daaaa-4317-43a2-b831-d637962e0d5f] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.240533622s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.99s)

                                                
                                    
TestAddons/parallel/Yakd (11.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-pz6l2" [49de711e-1f72-47ff-b241-c34cd783a626] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004891751s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-638975 addons disable yakd --alsologtostderr -v=1: (5.790641229s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

                                                
                                    
TestAddons/StoppedEnableDisable (90.5s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-638975
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-638975: (1m30.291114532s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-638975
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-638975
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-638975
--- PASS: TestAddons/StoppedEnableDisable (90.50s)

                                                
                                    
TestCertOptions (45.22s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-037434 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-037434 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (43.710471313s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-037434 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-037434 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-037434 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-037434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-037434
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-037434: (1.050461446s)
--- PASS: TestCertOptions (45.22s)
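
The certificate checks above are plain openssl inspection of the generated apiserver cert. A short sketch with illustrative values for the extra SANs and port:

minikube start -p cert-demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
# the extra names/IPs should show up among the apiserver certificate's SANs
minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# and the in-VM admin kubeconfig should point at the custom apiserver port
minikube ssh -p cert-demo -- "sudo cat /etc/kubernetes/admin.conf" | grep "server:"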

                                                
                                    
TestCertExpiration (309.72s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-146414 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-146414 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m21.651566504s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-146414 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-146414 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (47.136261814s)
helpers_test.go:175: Cleaning up "cert-expiration-146414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-146414
--- PASS: TestCertExpiration (309.72s)

                                                
                                    
TestForceSystemdFlag (64.36s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-580074 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-580074 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m3.131557386s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-580074 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-580074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-580074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-580074: (1.03933857s)
--- PASS: TestForceSystemdFlag (64.36s)
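
What the assertion above inspects is CRI-O's cgroup manager; a minimal manual check using the same drop-in path as the test (profile name is a placeholder):

minikube start -p systemd-demo --force-systemd --driver=kvm2 --container-runtime=crio
# with --force-systemd the runtime should be configured for the systemd cgroup manager
minikube -p systemd-demo ssh "grep -i cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"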

                                                
                                    
TestForceSystemdEnv (64.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-076578 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-076578 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.042847343s)
helpers_test.go:175: Cleaning up "force-systemd-env-076578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-076578
--- PASS: TestForceSystemdEnv (64.93s)

                                                
                                    
TestErrorSpam/setup (38.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-527873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-527873 --driver=kvm2  --container-runtime=crio
E1119 21:54:25.102809  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:25.109189  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:25.120562  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:25.141988  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:25.183427  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:25.264869  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:25.426421  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:25.748154  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:26.390261  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:27.672534  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:30.235065  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:35.357091  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:45.599113  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-527873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-527873 --driver=kvm2  --container-runtime=crio: (38.506623229s)
--- PASS: TestErrorSpam/setup (38.51s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 status
--- PASS: TestErrorSpam/status (0.67s)

                                                
                                    
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
TestErrorSpam/unpause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

                                                
                                    
TestErrorSpam/stop (4.95s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 stop: (2.102845165s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 stop: (1.519928633s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-527873 --log_dir /tmp/nospam-527873 stop: (1.33012618s)
--- PASS: TestErrorSpam/stop (4.95s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21918-117497/.minikube/files/etc/test/nested/copy/121369/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.59s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-274272 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1119 21:55:06.080903  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:55:47.043319  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-274272 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m18.585730034s)
--- PASS: TestFunctional/serial/StartWithProxy (78.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/delete_echo-server_images (0s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:1.0: context deadline exceeded (941ns)
functional_test.go:207: failed to remove image "kicbase/echo-server:1.0" from docker images. args "docker rmi -f kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-274272
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:functional-274272: context deadline exceeded (367ns)
functional_test.go:207: failed to remove image "kicbase/echo-server:functional-274272" from docker images. args "docker rmi -f kicbase/echo-server:functional-274272": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.00s)

                                                
                                    
TestFunctional/delete_my-image_image (0s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-274272
functional_test.go:213: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-274272: context deadline exceeded (463ns)
functional_test.go:215: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-274272": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.00s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-274272
functional_test.go:221: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-274272: context deadline exceeded (379ns)
functional_test.go:223: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-274272": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.00s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (240.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1119 22:49:25.094595  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m59.779710434s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (240.36s)
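
The --ha start used here brings up several control-plane nodes under a single profile. A bare-bones sketch of the same shape (profile name is a placeholder):

minikube start -p ha-demo --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
minikube -p ha-demo status --alsologtostderr -v 5
# control-plane members and workers appear as separate nodes; `node add` grows the cluster
kubectl --context ha-demo get nodes -o wide
minikube -p ha-demo node add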

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 kubectl -- rollout status deployment/busybox: (4.243435738s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-6q5gq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-vl8nf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-xjvfn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-6q5gq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-vl8nf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-xjvfn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-6q5gq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-vl8nf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-xjvfn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.60s)
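
The DNS assertions above amount to resolving cluster names from inside the deployed pods. A condensed sketch against the same busybox deployment, using exec on the deployment rather than an individual pod name (requires a reasonably recent kubectl):

minikube -p ha-demo kubectl -- rollout status deployment/busybox
# in-cluster and external resolution from one of the busybox replicas
minikube -p ha-demo kubectl -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local
minikube -p ha-demo kubectl -- exec deploy/busybox -- nslookup kubernetes.io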

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-6q5gq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-6q5gq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-vl8nf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-vl8nf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-xjvfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 kubectl -- exec busybox-7b57f96db7-xjvfn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 node add --alsologtostderr -v 5: (45.729710992s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.42s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-487903 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp testdata/cp-test.txt ha-487903:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903_ha-487903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test_ha-487903_ha-487903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903_ha-487903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test_ha-487903_ha-487903-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903_ha-487903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test_ha-487903_ha-487903-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp testdata/cp-test.txt ha-487903-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m02:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m02_ha-487903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test_ha-487903-m02_ha-487903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m02:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m02_ha-487903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test_ha-487903-m02_ha-487903-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m02:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903-m02_ha-487903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test_ha-487903-m02_ha-487903-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp testdata/cp-test.txt ha-487903-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m03_ha-487903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m03_ha-487903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m03:/home/docker/cp-test.txt ha-487903-m04:/home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test_ha-487903-m03_ha-487903-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp testdata/cp-test.txt ha-487903-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile651617511/001/cp-test_ha-487903-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903:/home/docker/cp-test_ha-487903-m04_ha-487903.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903 "sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m02:/home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m02 "sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 cp ha-487903-m04:/home/docker/cp-test.txt ha-487903-m03:/home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 ssh -n ha-487903-m03 "sudo cat /home/docker/cp-test_ha-487903-m04_ha-487903-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.88s)
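
The copy matrix above reduces to two primitives, minikube cp and per-node ssh. A tiny sketch with placeholder profile and node names following the same -m02/-m03 pattern as the log:

# push a local file to a specific node, then read it back there
minikube -p ha-demo cp ./cp-test.txt ha-demo-m02:/home/docker/cp-test.txt
minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"
# node-to-node copies use the same node:path addressing
minikube -p ha-demo cp ha-demo-m02:/home/docker/cp-test.txt ha-demo-m03:/home/docker/cp-test.txt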

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (89.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 node stop m02 --alsologtostderr -v 5: (1m29.396207328s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5: exit status 7 (545.45066ms)

                                                
                                                
-- stdout --
	ha-487903
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-487903-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-487903-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-487903-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:53:56.544415  139573 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:53:56.544716  139573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:53:56.544727  139573 out.go:374] Setting ErrFile to fd 2...
	I1119 22:53:56.544732  139573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:53:56.544909  139573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 22:53:56.545101  139573 out.go:368] Setting JSON to false
	I1119 22:53:56.545135  139573 mustload.go:66] Loading cluster: ha-487903
	I1119 22:53:56.545212  139573 notify.go:221] Checking for updates...
	I1119 22:53:56.545513  139573 config.go:182] Loaded profile config "ha-487903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:53:56.545528  139573 status.go:174] checking status of ha-487903 ...
	I1119 22:53:56.547571  139573 status.go:371] ha-487903 host status = "Running" (err=<nil>)
	I1119 22:53:56.547598  139573 host.go:66] Checking if "ha-487903" exists ...
	I1119 22:53:56.550484  139573 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:53:56.551037  139573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:53:56.551069  139573 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:53:56.551219  139573 host.go:66] Checking if "ha-487903" exists ...
	I1119 22:53:56.551471  139573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:53:56.553530  139573 main.go:143] libmachine: domain ha-487903 has defined MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:53:56.553842  139573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:81:53", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:47:36 +0000 UTC Type:0 Mac:52:54:00:a9:81:53 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:ha-487903 Clientid:01:52:54:00:a9:81:53}
	I1119 22:53:56.553864  139573 main.go:143] libmachine: domain ha-487903 has defined IP address 192.168.39.15 and MAC address 52:54:00:a9:81:53 in network mk-ha-487903
	I1119 22:53:56.554040  139573 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903/id_rsa Username:docker}
	I1119 22:53:56.650899  139573 ssh_runner.go:195] Run: systemctl --version
	I1119 22:53:56.660112  139573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:53:56.681865  139573 kubeconfig.go:125] found "ha-487903" server: "https://192.168.39.254:8443"
	I1119 22:53:56.681945  139573 api_server.go:166] Checking apiserver status ...
	I1119 22:53:56.682005  139573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:53:56.712021  139573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup
	W1119 22:53:56.726678  139573 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:53:56.726790  139573 ssh_runner.go:195] Run: ls
	I1119 22:53:56.733015  139573 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1119 22:53:56.738608  139573 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1119 22:53:56.738635  139573 status.go:463] ha-487903 apiserver status = Running (err=<nil>)
	I1119 22:53:56.738647  139573 status.go:176] ha-487903 status: &{Name:ha-487903 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:53:56.738682  139573 status.go:174] checking status of ha-487903-m02 ...
	I1119 22:53:56.740562  139573 status.go:371] ha-487903-m02 host status = "Stopped" (err=<nil>)
	I1119 22:53:56.740585  139573 status.go:384] host is not running, skipping remaining checks
	I1119 22:53:56.740593  139573 status.go:176] ha-487903-m02 status: &{Name:ha-487903-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:53:56.740617  139573 status.go:174] checking status of ha-487903-m03 ...
	I1119 22:53:56.741995  139573 status.go:371] ha-487903-m03 host status = "Running" (err=<nil>)
	I1119 22:53:56.742014  139573 host.go:66] Checking if "ha-487903-m03" exists ...
	I1119 22:53:56.744577  139573 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 22:53:56.745004  139573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 22:53:56.745034  139573 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 22:53:56.745197  139573 host.go:66] Checking if "ha-487903-m03" exists ...
	I1119 22:53:56.745451  139573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:53:56.747668  139573 main.go:143] libmachine: domain ha-487903-m03 has defined MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 22:53:56.748036  139573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:3d", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:50:12 +0000 UTC Type:0 Mac:52:54:00:b3:68:3d Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-487903-m03 Clientid:01:52:54:00:b3:68:3d}
	I1119 22:53:56.748059  139573 main.go:143] libmachine: domain ha-487903-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:b3:68:3d in network mk-ha-487903
	I1119 22:53:56.748216  139573 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m03/id_rsa Username:docker}
	I1119 22:53:56.834395  139573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:53:56.858172  139573 kubeconfig.go:125] found "ha-487903" server: "https://192.168.39.254:8443"
	I1119 22:53:56.858208  139573 api_server.go:166] Checking apiserver status ...
	I1119 22:53:56.858253  139573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:53:56.881250  139573 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1756/cgroup
	W1119 22:53:56.894681  139573 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1756/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:53:56.894752  139573 ssh_runner.go:195] Run: ls
	I1119 22:53:56.900183  139573 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1119 22:53:56.905114  139573 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1119 22:53:56.905139  139573 status.go:463] ha-487903-m03 apiserver status = Running (err=<nil>)
	I1119 22:53:56.905149  139573 status.go:176] ha-487903-m03 status: &{Name:ha-487903-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:53:56.905168  139573 status.go:174] checking status of ha-487903-m04 ...
	I1119 22:53:56.907030  139573 status.go:371] ha-487903-m04 host status = "Running" (err=<nil>)
	I1119 22:53:56.907057  139573 host.go:66] Checking if "ha-487903-m04" exists ...
	I1119 22:53:56.909750  139573 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 22:53:56.910228  139573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 22:53:56.910265  139573 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 22:53:56.910472  139573 host.go:66] Checking if "ha-487903-m04" exists ...
	I1119 22:53:56.910691  139573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:53:56.912932  139573 main.go:143] libmachine: domain ha-487903-m04 has defined MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 22:53:56.913277  139573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:eb:f3:c3", ip: ""} in network mk-ha-487903: {Iface:virbr1 ExpiryTime:2025-11-19 23:51:45 +0000 UTC Type:0 Mac:52:54:00:eb:f3:c3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-487903-m04 Clientid:01:52:54:00:eb:f3:c3}
	I1119 22:53:56.913300  139573 main.go:143] libmachine: domain ha-487903-m04 has defined IP address 192.168.39.187 and MAC address 52:54:00:eb:f3:c3 in network mk-ha-487903
	I1119 22:53:56.913471  139573 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/ha-487903-m04/id_rsa Username:docker}
	I1119 22:53:57.000564  139573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:53:57.021092  139573 status.go:176] ha-487903-m04 status: &{Name:ha-487903-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (89.94s)
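Note: after "node stop m02" the status command above exits non-zero (exit status 7 in this run) while still printing per-node state, and the stderr trace shows each node being checked in turn. The following Go sketch, assuming the same ha-487903 profile, runs status and treats a non-zero exit as "at least one node is not fully running" rather than a hard failure; the exit-code meaning is inferred from this run only.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-487903", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes report Running")
	case errors.As(err, &exitErr):
		// In this run a stopped secondary surfaced as exit status 7.
		fmt.Printf("status exited %d: at least one node is not fully running\n", exitErr.ExitCode())
	default:
		fmt.Printf("could not run minikube: %v\n", err)
	}
}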

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (40.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 node start m02 --alsologtostderr -v 5
E1119 22:54:25.094445  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-487903 node start m02 --alsologtostderr -v 5: (39.698403787s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-487903 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.65s)
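Note: RestartSecondaryNode finishes by running "kubectl get nodes" to confirm the restarted control-plane node rejoined. A small Go sketch of that final check, polling until no node reports NotReady; the two-minute deadline and five-second interval are arbitrary choices, not values from the test.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "nodes", "--no-headers").Output()
		if err == nil && len(out) > 0 && !strings.Contains(string(out), "NotReady") {
			fmt.Print(string(out))
			fmt.Println("all listed nodes are Ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for nodes to become Ready")
}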

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                    
x
+
TestJSONOutput/start/Command (54.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-778681 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-778681 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (54.408133229s)
--- PASS: TestJSONOutput/start/Command (54.41s)
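Note: with --output=json, minikube start emits one JSON event per line (CloudEvents-style records, visible verbatim under TestErrorJSONOutput further down). The sketch below streams those events and prints the step progression; the field names (type, data.currentstep, data.message) are taken from the events shown in this report, and the json-output-demo profile name is made up for the example.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event mirrors the shape of the records shown in this report; the data values are all strings there.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "json-output-demo", // hypothetical profile name
		"--output=json", "--memory=3072", "--driver=kvm2", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println(err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip anything that is not a JSON event
		}
		fmt.Printf("%-40s step=%s msg=%s\n", ev.Type, ev.Data["currentstep"], ev.Data["message"])
	}
	_ = cmd.Wait()
}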

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-778681 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-778681 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.68s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-778681 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-778681 --output=json --user=testUser: (7.681836571s)
--- PASS: TestJSONOutput/stop/Command (7.68s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-170080 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-170080 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.572846ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c83ab9cc-272b-4cd2-a3a2-3e5debdfb915","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-170080] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a731ec8b-3c31-4294-9c54-c58480db8d64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"bbdf3ce6-c4a2-48ab-9aca-f30f9fa0f14b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"08325e93-7312-4ca4-bfcc-aad1b566c7b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig"}}
	{"specversion":"1.0","id":"eb196bf3-6492-46e3-8464-9b481979b7e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube"}}
	{"specversion":"1.0","id":"30f08a5f-6ce5-4ea9-b86c-f8da038668c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8ad16610-28a1-477f-9282-c0694dfc8bab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0d157f33-1fe0-46fc-93f0-dcebb5d7a39c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-170080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-170080
--- PASS: TestErrorJSONOutput (0.23s)
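Note: the stdout block above shows what a failed start looks like in JSON mode: the final record has type io.k8s.sigs.minikube.error, carrying the exit code ("56", DRV_UNSUPPORTED_OS) in its data field. This sketch picks that error event out of captured output, using the exact record from this run as its input.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// One line of --output=json output, copied verbatim from the run above.
const captured = `{"specversion":"1.0","id":"0d157f33-1fe0-46fc-93f0-dcebb5d7a39c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(strings.NewReader(captured))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("start failed: %s (exit code %s)\n", ev.Data["message"], ev.Data["exitcode"])
		}
	}
}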

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (85.48s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-506425 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-506425 --driver=kvm2  --container-runtime=crio: (40.679245073s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-509582 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-509582 --driver=kvm2  --container-runtime=crio: (42.117747957s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-506425
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-509582
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-509582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-509582
helpers_test.go:175: Cleaning up "first-506425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-506425
--- PASS: TestMinikubeProfile (85.48s)
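Note: TestMinikubeProfile switches the active profile with "minikube profile <name>" and then reads "profile list -ojson". The JSON schema is not reproduced in this log, so the sketch below decodes the output generically just to show its top-level shape rather than assuming specific fields.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Decode generically: the exact schema is not shown in this report.
	var doc map[string]any
	if err := json.Unmarshal(out, &doc); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for key, val := range doc {
		fmt.Printf("top-level key %q -> %T\n", key, val)
	}
}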

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (23.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-985085 --memory=3072 --mount-string /tmp/TestMountStartserial1805269432/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-985085 --memory=3072 --mount-string /tmp/TestMountStartserial1805269432/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.750324988s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.75s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-985085 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-985085 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
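Note: the mount check above is done from inside the VM with "findmnt --json /minikube-host" over "minikube ssh". A sketch of the same check, assuming the mount-start-1-985085 profile from this run is still up; the filesystems/target/source/fstype fields follow findmnt's usual JSON layout rather than anything shown in this log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOut matches the usual layout of "findmnt --json" output (assumption, not from this log).
type findmntOut struct {
	Filesystems []struct {
		Target string `json:"target"`
		Source string `json:"source"`
		Fstype string `json:"fstype"`
	} `json:"filesystems"`
}

func main() {
	// Ask the node itself whether /minikube-host is mounted, as the test does.
	out, err := exec.Command("minikube", "-p", "mount-start-1-985085",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("findmnt over ssh failed:", err)
		return
	}
	var fm findmntOut
	if err := json.Unmarshal(out, &fm); err != nil || len(fm.Filesystems) == 0 {
		fmt.Println("mount not found or unexpected output")
		return
	}
	fs := fm.Filesystems[0]
	fmt.Printf("%s is mounted from %s (%s)\n", fs.Target, fs.Source, fs.Fstype)
}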

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (21.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-005062 --memory=3072 --mount-string /tmp/TestMountStartserial1805269432/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-005062 --memory=3072 --mount-string /tmp/TestMountStartserial1805269432/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.735494153s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.74s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005062 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005062 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-985085 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005062 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005062 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-005062
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-005062: (1.326678633s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.09s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-005062
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-005062: (20.085897187s)
--- PASS: TestMountStart/serial/RestartStopped (21.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005062 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-005062 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (103.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-966771 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1119 23:14:25.095767  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-966771 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m43.337751317s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.69s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-966771 -- rollout status deployment/busybox: (3.608463882s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-kwb8j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-qt8gc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-kwb8j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-qt8gc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-kwb8j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-qt8gc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.19s)
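Note: DeployApp2Nodes enumerates the busybox pods with jsonpath ({.items[*].metadata.name}) and then execs nslookup inside each one. The sketch below reproduces the name-gathering step plus one in-pod lookup, using the same jsonpath expression and the multinode-966771 profile from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "multinode-966771" // profile name taken from this run

	// Same jsonpath the test uses to list the deployed pods.
	out, err := exec.Command("minikube", "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("get pods failed:", err)
		return
	}

	for _, pod := range strings.Fields(string(out)) {
		// Resolve an external name from inside the pod, as the test does with nslookup.
		res, err := exec.Command("minikube", "kubectl", "-p", profile, "--",
			"exec", pod, "--", "nslookup", "kubernetes.io").CombinedOutput()
		fmt.Printf("--- %s ---\n%s", pod, res)
		if err != nil {
			fmt.Println("lookup failed:", err)
		}
	}
}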

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-kwb8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-kwb8j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-qt8gc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-966771 -- exec busybox-7b57f96db7-qt8gc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (44.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-966771 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-966771 -v=5 --alsologtostderr: (43.827459508s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.28s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-966771 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp testdata/cp-test.txt multinode-966771:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3532697773/001/cp-test_multinode-966771.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771:/home/docker/cp-test.txt multinode-966771-m02:/home/docker/cp-test_multinode-966771_multinode-966771-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m02 "sudo cat /home/docker/cp-test_multinode-966771_multinode-966771-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771:/home/docker/cp-test.txt multinode-966771-m03:/home/docker/cp-test_multinode-966771_multinode-966771-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m03 "sudo cat /home/docker/cp-test_multinode-966771_multinode-966771-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp testdata/cp-test.txt multinode-966771-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3532697773/001/cp-test_multinode-966771-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771-m02:/home/docker/cp-test.txt multinode-966771:/home/docker/cp-test_multinode-966771-m02_multinode-966771.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771 "sudo cat /home/docker/cp-test_multinode-966771-m02_multinode-966771.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771-m02:/home/docker/cp-test.txt multinode-966771-m03:/home/docker/cp-test_multinode-966771-m02_multinode-966771-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m03 "sudo cat /home/docker/cp-test_multinode-966771-m02_multinode-966771-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp testdata/cp-test.txt multinode-966771-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3532697773/001/cp-test_multinode-966771-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771-m03:/home/docker/cp-test.txt multinode-966771:/home/docker/cp-test_multinode-966771-m03_multinode-966771.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771 "sudo cat /home/docker/cp-test_multinode-966771-m03_multinode-966771.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 cp multinode-966771-m03:/home/docker/cp-test.txt multinode-966771-m02:/home/docker/cp-test_multinode-966771-m03_multinode-966771-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 ssh -n multinode-966771-m02 "sudo cat /home/docker/cp-test_multinode-966771-m03_multinode-966771-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.11s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-966771 node stop m03: (1.660714434s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-966771 status: exit status 7 (345.389024ms)

                                                
                                                
-- stdout --
	multinode-966771
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-966771-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-966771-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr: exit status 7 (329.893126ms)

                                                
                                                
-- stdout --
	multinode-966771
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-966771-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-966771-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:15:51.242183  148717 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:15:51.242411  148717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:15:51.242419  148717 out.go:374] Setting ErrFile to fd 2...
	I1119 23:15:51.242423  148717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:15:51.242616  148717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:15:51.242795  148717 out.go:368] Setting JSON to false
	I1119 23:15:51.242826  148717 mustload.go:66] Loading cluster: multinode-966771
	I1119 23:15:51.242891  148717 notify.go:221] Checking for updates...
	I1119 23:15:51.243242  148717 config.go:182] Loaded profile config "multinode-966771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:15:51.243260  148717 status.go:174] checking status of multinode-966771 ...
	I1119 23:15:51.245468  148717 status.go:371] multinode-966771 host status = "Running" (err=<nil>)
	I1119 23:15:51.245488  148717 host.go:66] Checking if "multinode-966771" exists ...
	I1119 23:15:51.247956  148717 main.go:143] libmachine: domain multinode-966771 has defined MAC address 52:54:00:f4:83:8e in network mk-multinode-966771
	I1119 23:15:51.248484  148717 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:83:8e", ip: ""} in network mk-multinode-966771: {Iface:virbr1 ExpiryTime:2025-11-20 00:13:24 +0000 UTC Type:0 Mac:52:54:00:f4:83:8e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-966771 Clientid:01:52:54:00:f4:83:8e}
	I1119 23:15:51.248510  148717 main.go:143] libmachine: domain multinode-966771 has defined IP address 192.168.39.43 and MAC address 52:54:00:f4:83:8e in network mk-multinode-966771
	I1119 23:15:51.248641  148717 host.go:66] Checking if "multinode-966771" exists ...
	I1119 23:15:51.248899  148717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:15:51.251233  148717 main.go:143] libmachine: domain multinode-966771 has defined MAC address 52:54:00:f4:83:8e in network mk-multinode-966771
	I1119 23:15:51.251568  148717 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f4:83:8e", ip: ""} in network mk-multinode-966771: {Iface:virbr1 ExpiryTime:2025-11-20 00:13:24 +0000 UTC Type:0 Mac:52:54:00:f4:83:8e Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:multinode-966771 Clientid:01:52:54:00:f4:83:8e}
	I1119 23:15:51.251596  148717 main.go:143] libmachine: domain multinode-966771 has defined IP address 192.168.39.43 and MAC address 52:54:00:f4:83:8e in network mk-multinode-966771
	I1119 23:15:51.251715  148717 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/multinode-966771/id_rsa Username:docker}
	I1119 23:15:51.338612  148717 ssh_runner.go:195] Run: systemctl --version
	I1119 23:15:51.345027  148717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:15:51.363303  148717 kubeconfig.go:125] found "multinode-966771" server: "https://192.168.39.43:8443"
	I1119 23:15:51.363345  148717 api_server.go:166] Checking apiserver status ...
	I1119 23:15:51.363390  148717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 23:15:51.384682  148717 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup
	W1119 23:15:51.396495  148717 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 23:15:51.396544  148717 ssh_runner.go:195] Run: ls
	I1119 23:15:51.401756  148717 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I1119 23:15:51.407302  148717 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I1119 23:15:51.407324  148717 status.go:463] multinode-966771 apiserver status = Running (err=<nil>)
	I1119 23:15:51.407336  148717 status.go:176] multinode-966771 status: &{Name:multinode-966771 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:15:51.407367  148717 status.go:174] checking status of multinode-966771-m02 ...
	I1119 23:15:51.408992  148717 status.go:371] multinode-966771-m02 host status = "Running" (err=<nil>)
	I1119 23:15:51.409010  148717 host.go:66] Checking if "multinode-966771-m02" exists ...
	I1119 23:15:51.411676  148717 main.go:143] libmachine: domain multinode-966771-m02 has defined MAC address 52:54:00:e5:5e:69 in network mk-multinode-966771
	I1119 23:15:51.412059  148717 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:5e:69", ip: ""} in network mk-multinode-966771: {Iface:virbr1 ExpiryTime:2025-11-20 00:14:22 +0000 UTC Type:0 Mac:52:54:00:e5:5e:69 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-966771-m02 Clientid:01:52:54:00:e5:5e:69}
	I1119 23:15:51.412098  148717 main.go:143] libmachine: domain multinode-966771-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:e5:5e:69 in network mk-multinode-966771
	I1119 23:15:51.412255  148717 host.go:66] Checking if "multinode-966771-m02" exists ...
	I1119 23:15:51.412511  148717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 23:15:51.414628  148717 main.go:143] libmachine: domain multinode-966771-m02 has defined MAC address 52:54:00:e5:5e:69 in network mk-multinode-966771
	I1119 23:15:51.415020  148717 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:5e:69", ip: ""} in network mk-multinode-966771: {Iface:virbr1 ExpiryTime:2025-11-20 00:14:22 +0000 UTC Type:0 Mac:52:54:00:e5:5e:69 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-966771-m02 Clientid:01:52:54:00:e5:5e:69}
	I1119 23:15:51.415054  148717 main.go:143] libmachine: domain multinode-966771-m02 has defined IP address 192.168.39.44 and MAC address 52:54:00:e5:5e:69 in network mk-multinode-966771
	I1119 23:15:51.415216  148717 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21918-117497/.minikube/machines/multinode-966771-m02/id_rsa Username:docker}
	I1119 23:15:51.495083  148717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 23:15:51.511920  148717 status.go:176] multinode-966771-m02 status: &{Name:multinode-966771-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:15:51.511972  148717 status.go:174] checking status of multinode-966771-m03 ...
	I1119 23:15:51.513553  148717 status.go:371] multinode-966771-m03 host status = "Stopped" (err=<nil>)
	I1119 23:15:51.513577  148717 status.go:384] host is not running, skipping remaining checks
	I1119 23:15:51.513585  148717 status.go:176] multinode-966771-m03 status: &{Name:multinode-966771-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-966771 node start m03 -v=5 --alsologtostderr: (39.38731193s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.92s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (312.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-966771
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-966771
E1119 23:17:28.187017  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 23:19:25.103503  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-966771: (2m54.551701879s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-966771 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-966771 --wait=true -v=5 --alsologtostderr: (2m17.494356698s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-966771
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.18s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-966771 node delete m03: (2.128188711s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.60s)
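Note: after the node delete, the test confirms every remaining node's Ready condition with the go-template shown above, which prints one status per node. The sketch below wraps that template (dropping the stray single quotes from the logged command) and counts the Ready=True results.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template the test uses: emits one Ready-condition status per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ready, total := 0, 0
	for _, status := range strings.Fields(string(out)) {
		total++
		if status == "True" {
			ready++
		}
	}
	fmt.Printf("%d/%d nodes report Ready=True\n", ready, total)
}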

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (172.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 stop
E1119 23:24:25.103136  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-966771 stop: (2m52.483185159s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-966771 status: exit status 7 (65.536797ms)

                                                
                                                
-- stdout --
	multinode-966771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-966771-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr: exit status 7 (64.010383ms)

                                                
                                                
-- stdout --
	multinode-966771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-966771-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 23:24:38.817393  151598 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:24:38.817658  151598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:24:38.817667  151598 out.go:374] Setting ErrFile to fd 2...
	I1119 23:24:38.817671  151598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:24:38.817853  151598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:24:38.818056  151598 out.go:368] Setting JSON to false
	I1119 23:24:38.818095  151598 mustload.go:66] Loading cluster: multinode-966771
	I1119 23:24:38.818209  151598 notify.go:221] Checking for updates...
	I1119 23:24:38.818670  151598 config.go:182] Loaded profile config "multinode-966771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:24:38.818689  151598 status.go:174] checking status of multinode-966771 ...
	I1119 23:24:38.821018  151598 status.go:371] multinode-966771 host status = "Stopped" (err=<nil>)
	I1119 23:24:38.821035  151598 status.go:384] host is not running, skipping remaining checks
	I1119 23:24:38.821050  151598 status.go:176] multinode-966771 status: &{Name:multinode-966771 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 23:24:38.821077  151598 status.go:174] checking status of multinode-966771-m02 ...
	I1119 23:24:38.822237  151598 status.go:371] multinode-966771-m02 host status = "Stopped" (err=<nil>)
	I1119 23:24:38.822249  151598 status.go:384] host is not running, skipping remaining checks
	I1119 23:24:38.822253  151598 status.go:176] multinode-966771-m02 status: &{Name:multinode-966771-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (172.61s)
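Note: both status calls above exit with code 7, which minikube returns when the host is stopped rather than treating it as an error. A minimal sketch of reading that exit code programmatically (binary path and profile name copied from the run above; this is an illustration, not the test's own helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-966771", "status")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cluster is running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 is what the stopped cluster returns in the log above.
		fmt.Println("cluster is stopped")
	default:
		fmt.Println("status failed:", err)
	}
}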

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (97.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-966771 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-966771 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.457034252s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-966771 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (97.95s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-966771
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-966771-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-966771-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (93.059282ms)

                                                
                                                
-- stdout --
	* [multinode-966771-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-966771-m02' is duplicated with machine name 'multinode-966771-m02' in profile 'multinode-966771'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-966771-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-966771-m03 --driver=kvm2  --container-runtime=crio: (43.110485129s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-966771
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-966771: exit status 80 (214.009777ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-966771 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-966771-m03 already exists in multinode-966771-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯


                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-966771-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.34s)

                                                
                                    
x
+
TestScheduledStopUnix (113.86s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-025997 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-025997 --memory=3072 --driver=kvm2  --container-runtime=crio: (42.201723779s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-025997 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 23:30:24.282408  154043 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:30:24.282578  154043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:30:24.282590  154043 out.go:374] Setting ErrFile to fd 2...
	I1119 23:30:24.282594  154043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:30:24.282819  154043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:30:24.283112  154043 out.go:368] Setting JSON to false
	I1119 23:30:24.283232  154043 mustload.go:66] Loading cluster: scheduled-stop-025997
	I1119 23:30:24.283620  154043 config.go:182] Loaded profile config "scheduled-stop-025997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:30:24.283695  154043 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/config.json ...
	I1119 23:30:24.283936  154043 mustload.go:66] Loading cluster: scheduled-stop-025997
	I1119 23:30:24.284074  154043 config.go:182] Loaded profile config "scheduled-stop-025997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-025997 -n scheduled-stop-025997
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-025997 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 23:30:24.575748  154088 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:30:24.576075  154088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:30:24.576086  154088 out.go:374] Setting ErrFile to fd 2...
	I1119 23:30:24.576092  154088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:30:24.576317  154088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:30:24.576549  154088 out.go:368] Setting JSON to false
	I1119 23:30:24.576755  154088 daemonize_unix.go:73] killing process 154077 as it is an old scheduled stop
	I1119 23:30:24.576893  154088 mustload.go:66] Loading cluster: scheduled-stop-025997
	I1119 23:30:24.577313  154088 config.go:182] Loaded profile config "scheduled-stop-025997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:30:24.577425  154088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/config.json ...
	I1119 23:30:24.577622  154088 mustload.go:66] Loading cluster: scheduled-stop-025997
	I1119 23:30:24.577761  154088 config.go:182] Loaded profile config "scheduled-stop-025997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 23:30:24.582067  121369 retry.go:31] will retry after 83.377µs: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.583243  121369 retry.go:31] will retry after 89.308µs: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.584389  121369 retry.go:31] will retry after 221.13µs: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.585528  121369 retry.go:31] will retry after 415.191µs: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.586673  121369 retry.go:31] will retry after 542.985µs: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.587814  121369 retry.go:31] will retry after 498.18µs: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.588937  121369 retry.go:31] will retry after 1.692031ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.591130  121369 retry.go:31] will retry after 1.51796ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.593353  121369 retry.go:31] will retry after 1.638754ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.595561  121369 retry.go:31] will retry after 5.600757ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.601792  121369 retry.go:31] will retry after 3.266664ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.606058  121369 retry.go:31] will retry after 9.554658ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.616368  121369 retry.go:31] will retry after 14.935418ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.631683  121369 retry.go:31] will retry after 13.193872ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.646001  121369 retry.go:31] will retry after 20.083118ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
I1119 23:30:24.666218  121369 retry.go:31] will retry after 58.621836ms: open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-025997 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-025997 -n scheduled-stop-025997
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-025997
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-025997 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 23:30:50.293908  154239 out.go:360] Setting OutFile to fd 1 ...
	I1119 23:30:50.294043  154239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:30:50.294054  154239 out.go:374] Setting ErrFile to fd 2...
	I1119 23:30:50.294060  154239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 23:30:50.294346  154239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-117497/.minikube/bin
	I1119 23:30:50.294569  154239 out.go:368] Setting JSON to false
	I1119 23:30:50.294650  154239 mustload.go:66] Loading cluster: scheduled-stop-025997
	I1119 23:30:50.294965  154239 config.go:182] Loaded profile config "scheduled-stop-025997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 23:30:50.295038  154239 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/scheduled-stop-025997/config.json ...
	I1119 23:30:50.295218  154239 mustload.go:66] Loading cluster: scheduled-stop-025997
	I1119 23:30:50.295312  154239 config.go:182] Loaded profile config "scheduled-stop-025997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-025997
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-025997: exit status 7 (61.01439ms)

                                                
                                                
-- stdout --
	scheduled-stop-025997
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-025997 -n scheduled-stop-025997
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-025997 -n scheduled-stop-025997: exit status 7 (60.898165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-025997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-025997
--- PASS: TestScheduledStopUnix (113.86s)
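Note: the retry.go lines above show the test polling for the scheduled-stop pid file with steadily growing delays. A minimal sketch of that wait-with-backoff pattern (the path, starting delay, and timeout below are illustrative assumptions, not the values minikube uses):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile re-checks for path until it exists or timeout elapses,
// doubling the sleep between attempts like the retry intervals in the log.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	// Hypothetical pid-file location; the real file lives under the profile directory.
	if err := waitForFile("/tmp/scheduled-stop.pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}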

                                                
                                    
x
+
TestRunningBinaryUpgrade (155.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.694119218 start -p running-upgrade-125926 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.694119218 start -p running-upgrade-125926 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m41.400372626s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-125926 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-125926 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.886880837s)
helpers_test.go:175: Cleaning up "running-upgrade-125926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-125926
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-125926: (1.114908932s)
--- PASS: TestRunningBinaryUpgrade (155.97s)

                                                
                                    
x
+
TestKubernetesUpgrade (237.28s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.145214177s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-414923
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-414923: (2.120270664s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-414923 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-414923 status --format={{.Host}}: exit status 7 (75.856621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.074993896s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-414923 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.212115ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-414923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-414923
	    minikube start -p kubernetes-upgrade-414923 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4149232 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-414923 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1119 23:34:08.189163  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-414923 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.647673607s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-414923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-414923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-414923: (1.062914106s)
--- PASS: TestKubernetesUpgrade (237.28s)
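Note: after the upgrade to v1.34.1 the test confirms the cluster with `kubectl version --output=json`. A minimal sketch of pulling the server version out of that JSON (the struct covers only the field needed here; the field names follow kubectl's usual output and are an assumption, not taken from the test):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type versionOutput struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-414923",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var v versionOutput
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// After the final restart above this should print v1.34.1.
	fmt.Println("server version:", v.ServerVersion.GitVersion)
}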

                                                
                                    
x
+
TestISOImage/Setup (73.42s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-039657 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-039657 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m13.41851661s)
--- PASS: TestISOImage/Setup (73.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (149.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2412789330 start -p stopped-upgrade-426303 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2412789330 start -p stopped-upgrade-426303 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m21.768725233s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2412789330 -p stopped-upgrade-426303 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2412789330 -p stopped-upgrade-426303 stop: (2.346928145s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-426303 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1119 23:34:25.095346  121369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-117497/.minikube/profiles/addons-638975/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-426303 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.176881217s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-426303
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-426303: (1.422714385s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111091 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-111091 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (88.607155ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-111091] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-117497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-117497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (63.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111091 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111091 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.716411326s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-111091 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (63.02s)

                                                
                                    
x
+
TestPause/serial/Start (121.72s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-628329 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-628329 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m1.717020435s)
--- PASS: TestPause/serial/Start (121.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (30.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111091 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111091 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.710239117s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-111091 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-111091 status -o json: exit status 2 (216.210765ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-111091","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-111091
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-111091: (1.200355313s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (23.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111091 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111091 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (23.219477323s)
--- PASS: TestNoKubernetes/serial/Start (23.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21918-117497/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-111091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-111091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.565583ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
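Note: the check above treats any non-zero exit from `systemctl is-active` (surfaced through `minikube ssh` as "Process exited with status 4") as confirmation that no kubelet unit is running. A minimal sketch of the same probe (profile name copied from the run above; purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// is-active exits 0 only when the unit is active; any failure here means
	// the kubelet is not running inside the NoKubernetes guest.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-111091",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}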

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-111091
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-111091: (1.375158064s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (20.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111091 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111091 --driver=kvm2  --container-runtime=crio: (20.620416458s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.62s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (49.26s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-628329 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-628329 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.226532707s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-111091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-111091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (159.585715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                    
x
+
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-628329 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-628329 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-628329 --output=json --layout=cluster: exit status 2 (237.677457ms)

                                                
                                                
-- stdout --
	{"Name":"pause-628329","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-628329","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
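Note: the --layout=cluster output above reports StatusCode 418 ("Paused") and the command exits non-zero while the cluster is paused. A minimal sketch of decoding the top-level fields of that JSON (the struct is an assumption covering only what the log shows, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	// A paused cluster makes `minikube status` exit with status 2 (see above),
	// but the JSON is still written to stdout, so keep whatever output arrived.
	out, _ := exec.Command("out/minikube-linux-amd64", "status", "-p", "pause-628329",
		"--output=json", "--layout=cluster").Output()
	if len(out) == 0 {
		fmt.Println("no status output")
		return
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
}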

                                                
                                    
x
+
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-628329 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.91s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-628329 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-628329 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.85s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                    

Test skip (28/190)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-638975 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-274272 /tmp/TestFunctionalserialCacheCmdcacheadd_local2827528816/001
functional_test.go:1092: (dbg) Non-zero exit: docker build -t minikube-local-cache-test:functional-274272 /tmp/TestFunctionalserialCacheCmdcacheadd_local2827528816/001: context deadline exceeded (1.75µs)
functional_test.go:1094: failed to build docker image, skipping local test: context deadline exceeded
--- SKIP: TestFunctional/serial/CacheCmd/cache/add_local (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    